A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
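A minimal numerical sketch of the Tikhonov-regularized minimum-norm estimate described in this abstract (illustrative only: the matrix names, toy dimensions, and the regularization value are assumptions, not the authors' implementation):

```python
import numpy as np

def tikhonov_min_norm(L, b, lam):
    """Minimum-norm source estimate x = L^T (L L^T + lam*I)^{-1} b.

    L   : (n_sensors, n_sources) lead-field matrix, n_sensors << n_sources
    b   : (n_sensors,) measurement vector
    lam : Tikhonov regularization parameter (> 0), stabilizing against noise
    """
    n_sensors = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_sensors)   # regularized sensor-space Gram matrix
    return L.T @ np.linalg.solve(gram, b)      # minimum-norm solution fitting b

# toy usage: 32 sensors, 500 source voxels (assumed sizes)
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))
b = rng.standard_normal(32)
x_hat = tikhonov_min_norm(L, b, lam=1e-2)
```

Without the lam term this reduces to the plain minimum-norm (pseudo-inverse) solution, which is exactly the noise instability the abstract warns about.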
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE, such as X-ray computed tomography (CT) and ultrasonic and eddy-current flaw characterization and imaging. In many applications, it is common to bias the solution toward minimum (L^2)^2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum (L^2)^2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing compact objects such as voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational cost. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.)
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum norm least squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four inverse methods (standard weighted minimum norm, L1-norm, LORETA-FOCUSS, and Shrinking LORETA-FOCUSS) is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors than the other methods.
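The FOCUSS half of the recursion can be sketched as an iteratively reweighted minimum-norm loop; a hedged illustration (the initialization, iteration count, and damping value are assumptions, and the actual Shrinking LORETA-FOCUSS additionally prunes the solution space at each pass):

```python
import numpy as np

def focuss(L, b, n_iter=10, lam=1e-6):
    """Iteratively reweighted minimum norm (FOCUSS-style): the weight matrix
    follows the current solution amplitudes, so energy concentrates onto a
    few voxels as the iterations proceed."""
    x = np.ones(L.shape[1])                    # flat initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x))                 # reweight by current amplitudes
        LW = L @ W
        gram = LW @ LW.T + lam * np.eye(L.shape[0])
        x = W @ (LW.T @ np.linalg.solve(gram, b))  # weighted min-norm step
    return x
```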
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT.
Hybrid Weighted Minimum Norm Method: A new method based on LORETA to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper puts forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the prerequisite for developing the EEG inverse solution, assuming no other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on existing smoothness operators, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using the information in the current one, and repeat this process until the solutions of the last two iterations remain unchanged.
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
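A compact sketch of the two-level ℓ1/ℓ2 case: row-wise group soft-thresholding (the proximal operator of the mixed norm) inside a FISTA loop. This follows the general recipe in the abstract; variable names, the fixed iteration count, and the step-size choice are assumptions, not the authors' MxNE code:

```python
import numpy as np

def prox_l21(X, thresh):
    """Proximal operator of the l1/l2 mixed norm: shrink each row (source)
    by its l2 norm over time, zeroing weak rows entirely."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thresh / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def mxne_fista(G, M, alpha, n_iter=200):
    """FISTA for 0.5*||M - G X||_F^2 + alpha * sum over rows of ||X_row||_2.

    G : (n_sensors, n_sources) gain matrix,  M : (n_sensors, n_times) data
    """
    step = 1.0 / np.linalg.norm(G, 2) ** 2         # 1 / Lipschitz constant
    X = np.zeros((G.shape[1], M.shape[1]))
    Y, t = X.copy(), 1.0
    for _ in range(n_iter):
        grad = G.T @ (G @ Y - M)                   # gradient of the data fit
        X_new = prox_l21(Y - step * grad, alpha * step)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)  # Nesterov momentum
        X, t = X_new, t_new
    return X
```

Larger alpha zeroes more source rows, directly implementing the spatially focal, temporally smooth prior the abstract describes.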
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Sparse EEG/MEG source estimation via a group lasso
Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor
2017-01-01
Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally defined regions of interest for the definition of physiologically meaningful groups within a functionally based common space. Detailed simulations using realistic source geometries and data from a human visual evoked potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms
NASA Astrophysics Data System (ADS)
Hunek, Wojciech P.; Wach, Łukasz
2017-10-01
In this paper, new methods for the energy-based minimization of perfect control inputs are presented. For this purpose, multivariable integer- and fractional-order models are applied, which can be used to describe a variety of real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. However, these tools do not guarantee the optimal control corresponding to optimal input energy. Therefore, a new class of inverse-based methods is introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the great potential of the new energy-based approaches.
Full waveform inversion using envelope-based global correlation norm
NASA Astrophysics Data System (ADS)
Oh, Ju-Won; Alkhalifah, Tariq
2018-05-01
To increase the feasibility of full waveform inversion on real data, we suggest a new objective function, defined as the global correlation of the envelopes of the modelled and observed data. The envelope-based global correlation norm retains the advantage of envelope inversion, which generates artificial low-frequency information and thus provides the possibility of recovering long-wavelength structure at an early stage. In addition, it maintains the advantage of the global correlation norm, which reduces the sensitivity of the misfit to amplitude errors, so that the performance of inversion on real data is enhanced when the exact source wavelet is not available and more complex physics are ignored. Through a synthetic example for the 2-D SEG/EAGE overthrust model with an inaccurate source wavelet, we compare the performance of four different approaches: least-squares waveform inversion, least-squares envelope inversion, the global correlation norm, and the envelope-based global correlation norm. Finally, we apply the envelope-based global correlation norm to 3-D Ocean Bottom Cable (OBC) data from the North Sea. It captures the strong reflections from the high-velocity caprock and generates artificial low-frequency reflection energy that helps us recover the long-wavelength structure of the model domain in the early stages. From this long-wavelength model, the conventional global correlation norm is sequentially applied to invert for higher-resolution features of the model.
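A hedged sketch of the building blocks named in the abstract: the envelope via the analytic signal, and the negative globally normalized cross-correlation of envelopes used as a misfit. Function names and the single-trace setting are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def envelope(trace):
    """Instantaneous envelope of a seismic trace via the analytic signal."""
    return np.abs(hilbert(trace))

def envelope_global_correlation(d_mod, d_obs):
    """Misfit = -<e_mod/||e_mod||, e_obs/||e_obs||>. Normalizing both
    envelopes removes sensitivity to an overall amplitude factor (e.g., an
    inaccurate source wavelet), as the abstract emphasizes."""
    e_mod = envelope(d_mod)
    e_obs = envelope(d_obs)
    e_mod = e_mod / np.linalg.norm(e_mod)
    e_obs = e_obs / np.linalg.norm(e_obs)
    return -float(np.dot(e_mod, e_obs))
```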
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward modelling and simultaneous inversion in a complex geological model, including an irregular topography (or irregular reflector or velocity anomaly), we in this paper combine our previous multiphase arrival tracking method in triangular (2D) or tetrahedral (3D) cell models (referred to as the triangular shortest-path method, TSPM) with a linearized inversion solver (a damped minimum norm, constrained least squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel time inversion method for updating both velocity and reflector geometry by using multiphase arrival times. In the triangular/tetrahedral cells, we deduce the partial derivative of velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling, and that irregular velocity anomalies and reflector geometry can be accurately captured in the simultaneous inversion, because the triangular/tetrahedral cells can easily stitch the irregular topography or subsurface interfaces.
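The damped minimum-norm least-squares step can be illustrated with an off-the-shelf conjugate-gradient-type solver; a sketch under assumed toy dimensions and damping (the authors' DMNCLS-CG additionally handles constraints):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# toy sparse tomography-like system: 200 ray paths, 400 model cells (assumed)
A = sparse_random(200, 400, density=0.05, format="csr", random_state=1)
t_obs = np.random.default_rng(1).standard_normal(200)

# damp > 0 augments the objective with damp^2 * ||m||^2, i.e. a damped
# minimum-norm solution computed by a CG-type (LSQR) iteration
m_est = lsqr(A, t_obs, damp=0.1)[0]
```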
Inversion of Magnetic Measurements of the CHAMP Satellite Over the Pannonian Basin
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, P. T.; Wittmann, G.; Toronyi, B.; Puszta, S.
2011-01-01
The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed in a spherical shell: some 107,927 data recorded between January 1 and December 31 of 2008, covering the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a 0.5 x 0.5 degree spherical grid at an elevation of 324 km using a Gaussian weight function. The vertical gradient of these total magnetic anomalies was also computed and mapped onto the surface of a sphere at 324 km elevation. The spherical anomaly data at 425 km altitude were downward continued to 324 km. To interpret these data at the elevation of 324 km we used an inversion method with a polygonal prism forward model. The minimum problem was solved numerically by the Simplex and Simulated Annealing methods; an L2 norm was used in the case of Gaussian distribution parameters and an L1 norm in the case of Laplace distribution parameters. We interpret that the magnetic anomaly was produced by several sources and by the effect of the stable magnetization of the exsolution of hemo-ilmenite minerals in the upper crustal metamorphic rocks.
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using an L1 minimum norm (Fast-VESTAL) and then used the method to obtain source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images are obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution are obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions, including SNRs with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with human MEG responses, the results obtained from the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leakage and distorted source time-courses.
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method, labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurs in some electrodes (for example, signal loss during measurement), the L1TV and L1L2 methods obtain more accurate EPs in a robust manner. Therefore the solutions based on the L1-norm data term are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
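The iteratively reweighted norm idea for an L1 data term can be sketched as IRLS: large residuals are progressively down-weighted so they stop dominating the fit. A minimal illustration with an assumed zero-order (Tikhonov) penalty, not the authors' full L1TV/L1L2 schemes:

```python
import numpy as np

def irls_l1_data(A, b, lam, n_iter=20, eps=1e-6):
    """Approximately solve  min ||A x - b||_1 + lam * ||x||_2^2  by
    iteratively reweighted least squares: weight w_i = 1/|r_i| makes the
    weighted quadratic misfit mimic the L1 norm."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)       # L1 reweighting, guarded
        AwA = A.T @ (w[:, None] * A)
        x = np.linalg.solve(AwA + lam * np.eye(A.shape[1]), A.T @ (w * b))
    return x
```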
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
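The key projection named in the abstract, reduced to its q = 1 instance: projecting a matrix onto the Schatten 1-norm (nuclear-norm) ball via an SVD and a vector l1-ball projection of the singular values. A hedged sketch; the radius and matrix are placeholders:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of a nonnegative vector onto the l1 ball of
    radius r (sort-and-threshold simplex projection)."""
    if v.sum() <= r:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - r) / idx > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(X, r):
    """Project X onto {Y : sum of singular values of Y <= r}: project the
    singular values onto the l1 ball and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, r)) @ Vt
```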
Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao
2016-04-01
Traditional traveltime inversion for anisotropic media is, in general, based on a "weak" assumption on the anisotropic property, which simplifies both the forward part (ray tracing is performed once only) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm to handle the general (including "strong") anisotropic medium and also to design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is constructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we incorporated our newly developed ray tracing algorithm (multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum norm, constrained least squares problem with a conjugate gradient approach) to formulate a non-linear traveltime inversion scheme for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, but not as much as expected in the isotropic case, because the sensitivities (or derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution method for spectrum estimation. Based on the theory of SAR imaging, we show that the signal model of SAR imagery is amenable to data extrapolation methods for improving image resolution. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method on both simulated and actual measured data.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a "distributed solution" is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
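A hedged sketch of the core idea: replace thousands of single-dipole unknowns with a low-dimensional spherical-harmonics basis and fit its coefficients by least squares. Sensor directions, degree cutoff, and data here are synthetic placeholders, not the Harmony pipeline:

```python
import numpy as np
from scipy.special import sph_harm

def harmonics_basis(theta, phi, l_max):
    """Design matrix of real-valued spherical harmonics up to degree l_max,
    evaluated at given directions (theta: azimuth, phi: polar angle)."""
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, theta, phi)
            cols.append(y.real if m >= 0 else y.imag)
    return np.stack(cols, axis=1)

# toy usage: 128 sensor directions, degree 6 -> only (6+1)^2 = 49 unknowns
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 128)
phi = rng.uniform(0.0, np.pi, 128)
B = harmonics_basis(theta, phi, l_max=6)
data = rng.standard_normal(128)
coef, *_ = np.linalg.lstsq(B, data, rcond=None)   # smooth, low-dimensional fit
```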
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
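The minimum weighted-norm solution via the generalized inverse mentioned in the abstract can be written in closed form; a minimal sketch with placeholder names (the renormalization method's specific choice of the weight matrix W is what the paper derives):

```python
import numpy as np

def weighted_min_norm(A, b, W):
    """argmin x^T W x  subject to  A x = b, via the generalized inverse:
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} b.

    A : (n_obs, n_cells) sensitivity matrix linking source cells to sensors
    W : (n_cells, n_cells) symmetric positive-definite weight matrix
    """
    Winv_At = np.linalg.solve(W, A.T)
    return Winv_At @ np.linalg.solve(A @ Winv_At, b)
```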
NASA Astrophysics Data System (ADS)
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total-variation-norm-constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraint sets demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
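A hedged sketch of the primal dual hybrid gradient machinery on the closely related TV-regularized denoising problem (the paper instead projects onto a TV ball with box constraints; step sizes and iteration count below are assumptions):

```python
import numpy as np

def grad2d(u):
    """Forward-difference gradient with zero flux on the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div2d(px, py):
    """Discrete divergence, the negative adjoint of grad2d."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_pdhg(f, lam, n_iter=200):
    """Chambolle-Pock/PDHG for min_u 0.5*||u - f||^2 + lam*TV(u)."""
    sigma = tau = 1.0 / np.sqrt(8.0)          # sigma*tau*||grad||^2 <= 1
    u = f.copy(); u_bar = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad2d(u_bar)
        px += sigma * gx; py += sigma * gy
        mag = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / lam)
        px /= mag; py /= mag                  # project dual onto |p| <= lam
        u_old = u
        u = (u + tau * div2d(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u - u_old               # extrapolation step
    return u
```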
EEG-distributed inverse solutions for a spherical head model
NASA Astrophysics Data System (ADS)
Riera, J. J.; Fuentes, M. E.; Valdés, P. A.; Ohárriz, Y.
1998-08-01
The theoretical study of the minimum norm solution to the MEG inverse problem has been carried out in previous papers for the particular case of spherical symmetry. However, a similar study for the EEG is remarkably more difficult due to the very complicated nature of the expression relating the voltage differences on the scalp to the primary current density (PCD) even for this simple symmetry. This paper introduces the use of the electric lead field (ELF) on the dyadic formalism in the spherical coordinate system to overcome such a drawback using an expansion of the ELF in terms of longitudinal and orthogonal vector fields. This approach allows us to represent EEG Fourier coefficients on a 2-sphere in terms of a current multipole expansion. The choice of a suitable basis for the Hilbert space of the PCDs on the brain region allows the current multipole moments to be related by spatial transfer functions to the PCD spectral coefficients. Properties of the most used distributed inverse solutions are explored on the basis of these results. Also, a part of the ELF null space is completely characterized and those spherical components of the PCD which are possible silent candidates are discussed.
Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J
2004-03-01
Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. Having relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
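The unregularized core of such algorithms is a multiplicative, expectation-maximization-type update that preserves non-negativity; a hedged sketch (kernel, iteration count, and strictly positive data are assumptions, and the penalized one-step-late variants modify the denominator):

```python
import numpy as np

def min_idivergence(K, g_obs, n_iter=500):
    """Minimize Csiszar's I-divergence between a measured spectrum g_obs and
    K @ a via a Richardson-Lucy-style multiplicative update; K is a
    discretized nonnegative kernel (here, Planck's law) and a stays >= 0."""
    a = np.ones(K.shape[1])                   # positive initialization
    norm = K.sum(axis=0)                      # K^T 1, assumed nonzero
    for _ in range(n_iter):
        a *= (K.T @ (g_obs / (K @ a))) / norm
    return a
```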
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
…regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory… of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β… pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the…
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann
2011-10-07
The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
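The variable-splitting trick reads directly in code: write x = u - v with u, v >= 0 so that the l1 penalty becomes linear, then run projected gradient on the resulting bound-constrained quadratic. A hedged sketch with assumed step size and iteration count (the paper's gradient projection solver is more refined):

```python
import numpy as np

def l1_reg_gradproj(A, b, lam, n_iter=500):
    """min 0.5*||A(u - v) - b||^2 + lam*sum(u + v)  with u, v >= 0,
    solved by projected gradient; u - v recovers the signed potentials."""
    n = A.shape[1]
    u = np.zeros(n); v = np.zeros(n)
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz bound
    for _ in range(n_iter):
        g = A.T @ (A @ (u - v) - b)
        u = np.maximum(u - step * (g + lam), 0.0)    # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0)   # project onto v >= 0
    return u - v
```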
ERIC Educational Resources Information Center
Garcia-Quintana, Roan A.; Mappus, M. Lynne
1980-01-01
Norm-referenced data were utilized for determining the mastery cutoff score on a criterion-referenced test. Once a cutoff score on the norm-referenced measure is selected, the cutoff score on the criterion-referenced measure becomes that score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)
NASA Astrophysics Data System (ADS)
Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki
2015-06-01
Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function suffers from sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
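The robustness argument is visible in the residual weighting itself; a hedged sketch of the Student's t misfit and its derivative (nu and s are assumed hyperparameters):

```python
import numpy as np

def student_t_misfit(r, nu=3.0, s=1.0):
    """Per-residual misfit 0.5*(nu+1)*log(1 + r^2/(nu*s^2)) and its
    derivative. Unlike the l2 case (derivative = r, unbounded), the t
    derivative saturates for large |r|, so outliers stop dominating the
    gradient; unlike l1, it stays smooth near r = 0."""
    z = r ** 2 / (nu * s ** 2)
    phi = 0.5 * (nu + 1.0) * np.log1p(z)
    dphi = (nu + 1.0) * r / (nu * s ** 2 * (1.0 + z))
    return phi.sum(), dphi
```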
Correlation between the norm and the geometry of minimal networks
NASA Astrophysics Data System (ADS)
Laut, I. L.
2017-05-01
The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.
Special relativity derived from spacetime magma.
Greensite, Fred
2014-01-01
We present a derivation of relativistic spacetime largely untethered from specific physical considerations, in contrast to the many physically based derivations that have appeared in the last few decades. The argument proceeds from the inherent magma (groupoid) existing on the union of spacetime frame components [Formula: see text] and Euclidean [Formula: see text] which is consistent with an "inversion symmetry" constraint from which the Minkowski norm results. In this context, the latter is also characterized as one member of a class of "inverse norms" which play major roles with respect to various unital [Formula: see text]-algebras more generally.
Solution of underdetermined systems of equations with gridded a priori constraints.
Stiros, Stathis C; Saltogianni, Vasso
2014-01-01
The TOPINV (Topological Inversion, or TGS, Topological Grid Search) algorithm, initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach generalizes a previous conclusion that the algorithm can be used to solve certain integer ambiguity problems in geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. The proposed algorithm uses the a priori information in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. TOPINV does not focus on point solutions, but exploits the structural and topological constraints in each underdetermined system of equations to identify an optimal closed subspace of R^n containing the real solution. The centre of gravity of the grid points defining this subspace corresponds to a global, minimum-norm solution. The rationale and validity of the overall approach are demonstrated with examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
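A toy sketch of the grid-search idea under stated assumptions (Python; `topinv`, the tolerance rule, and the example equation are illustrative, not the published algorithm): scan an R^n grid built from a-priori bounds, keep the points whose residuals fall below a tolerance for every equation, and return the centre of gravity of the surviving points.

```python
import numpy as np
from itertools import product

def topinv(equations, bounds, steps, tol):
    """Scan an R^n grid defined by a-priori bounds; keep grid points whose
    residuals for every (possibly non-linear) equation fall below tol, and
    return the centre of gravity of the surviving points."""
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, steps)]
    kept = [p for p in product(*axes)
            if all(abs(f(np.array(p))) <= tol for f in equations)]
    if not kept:
        raise ValueError("tolerance too tight for this grid")
    return np.mean(np.array(kept), axis=0)

# one underdetermined equation, two unknowns: x + y = 1
sol = topinv([lambda p: p[0] + p[1] - 1.0],
             bounds=[(-2, 2), (-2, 2)], steps=[81, 81], tol=0.05)
print(sol)  # close to the minimum-norm solution (0.5, 0.5)
```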
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images consistent with the MEG data. Previous approaches to this problem have concentrated on weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions. This estimation technique selects, from the set of feasible images, the image with the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques such as functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
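A hedged sketch of the MAP idea described above (Python with SciPy; the Skilling-style entropy form, the names, and the toy data are assumptions, not the paper's implementation): a chi-squared likelihood term plays the role of noise rejection, while an entropy term biased toward a reference image acts as the prior.

```python
import numpy as np
from scipy.optimize import minimize

def map_entropy_image(A, b, prior, alpha, sigma):
    """MAP estimate: chi-squared likelihood (noise rejection) plus an
    entropy prior biased toward a reference image (e.g. from fMRI)."""
    def neg_log_posterior(x):
        resid = (A @ x - b) / sigma
        entropy = -np.sum(x * np.log(x / prior) - x + prior)
        return 0.5 * resid @ resid - alpha * entropy
    res = minimize(neg_log_posterior, prior.copy(),
                   bounds=[(1e-9, None)] * len(prior))
    return res.x

A = np.random.randn(10, 50)          # toy underdetermined lead field
b = A @ np.ones(50) + 0.01 * np.random.randn(10)
x_hat = map_entropy_image(A, b, prior=np.full(50, 0.5), alpha=0.1, sigma=0.01)
```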
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. In order to accomplish the sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. Wavelet-based 3-D inversion of the HEM data is compared with the results of L2-norm-based 3-D inversion to further investigate the features of the new method.
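A minimal illustration of the sparsity constraint in the wavelet domain (Python; assumes the PyWavelets package is available, and uses a soft-threshold proximal step as a stand-in for the paper's iteratively reweighted least-squares solver): detail coefficients are shrunk, which preserves a sharp discontinuity while suppressing small-scale noise.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def sparsify_in_wavelet_domain(model, wavelet="db4", level=3, lam=0.1):
    """Transform a 1-D model to the wavelet domain, soft-threshold the
    detail coefficients (the l1 proximal step), and reconstruct."""
    coeffs = pywt.wavedec(model, wavelet, level=level)
    thresholded = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft")
                                 for c in coeffs[1:]]
    return pywt.waverec(thresholded, wavelet)

model = np.r_[np.zeros(128), np.ones(128)]      # sharp discontinuity
print(sparsify_in_wavelet_domain(model)[:5])
```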
Linking Life Skills and Norms with Adolescent Substance Use and Delinquency in South Africa
ERIC Educational Resources Information Center
Lai, Mary H.; Graham, John W.; Caldwell, Linda L.; Smith, Edward A.; Bradley, Stephanie A.; Vergnani, Tania; Mathews, Cathy; Wegner, Lisa
2013-01-01
We examined how factors targeted in two popular prevention approaches relate to adolescent drug use and delinquency in South Africa. We hypothesized that adolescent life skills would be inversely related, and perceived norms directly related, to later drug use and delinquency. Multiple regression and a relative weights approach were conducted for each outcome…
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency must be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
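A sketch of the gradient-projection idea for the L1-norm trust-region sub-problem (Python; the step size, iteration count, and the Duchi et al. l1-ball projection are illustrative choices, not the paper's exact solver):

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto the l1 ball (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def tr_subproblem(grad, hess_vec, delta, n_iter=50, step=0.1):
    """Gradient projection on the local quadratic model, constrained to an
    l1-norm trust region of radius delta."""
    p = np.zeros_like(grad)
    for _ in range(n_iter):
        g = grad + hess_vec(p)
        p = project_l1_ball(p - step * g, delta)
    return p

H = np.diag([1.0, 10.0])                        # toy Hessian
p = tr_subproblem(np.array([1.0, 1.0]), lambda v: H @ v, delta=0.5)
print(p)
```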
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the method ran roughly 5.5 to seven times faster than the minimum-norm solution (the pseudoinverse), and at about the same rate as the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
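For contrast with the patented direct method, here is a minimal pseudoinverse (minimum-norm) allocation baseline, one of the comparison solutions named above (Python; the matrix values are arbitrary). Clipping to effector limits is what makes this baseline suboptimal once limits saturate:

```python
import numpy as np

def pseudoinverse_allocation(B, m_des, u_min, u_max):
    """Minimum-norm allocation u = pinv(B) @ m_des, clipped to effector
    limits; the clipping destroys optimality when any limit is active."""
    u = np.linalg.pinv(B) @ m_des
    return np.clip(u, u_min, u_max)

B = np.array([[1.0, 0.5, -0.5],
              [0.0, 1.0,  1.0],
              [0.2, -0.3, 0.4]])        # 3 objectives, 3 effectors (toy)
u = pseudoinverse_allocation(B, np.array([0.3, -0.1, 0.05]),
                             -np.ones(3), np.ones(3))
print(u, B @ u)
```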
NASA Astrophysics Data System (ADS)
Kunze, Herb; La Torre, Davide; Lin, Jianyi
2017-01-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM whose fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing minimization of the 1-norm instead of the 0-norm. Optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique that reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
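A compact sketch of the scalarization step (Python; the weight values and the toy criteria are illustrative assumptions): the three conflicting objectives are combined into a single program with non-negative trade-off weights, and the 1-norm serves as the convex surrogate for the combinatorial 0-norm.

```python
import numpy as np

def scalarized_objective(alpha, w, collage_error, entropy, sparsity):
    """Weighted single-criterion program: minimize collage error and the
    l1 sparsity surrogate while rewarding entropy."""
    return (w[0] * collage_error(alpha)
            - w[1] * entropy(alpha)
            + w[2] * sparsity(alpha))

sparsity = lambda a: np.sum(np.abs(a))                   # 1-norm surrogate
entropy = lambda a: -np.sum(np.abs(a) * np.log(np.abs(a) + 1e-12))
collage_error = lambda a: np.sum((a - 0.3) ** 2)         # toy quadratic model
print(scalarized_objective(np.array([0.1, 0.5]), (0.6, 0.2, 0.2),
                           collage_error, entropy, sparsity))
```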
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
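A minimal sketch of one reweighting step (Python; `irmxne_reweight` is a hypothetical helper and the constants are illustrative): the concave sum-of-square-roots block penalty is majorized by a weighted l2,1 norm whose weights come from the previous iterate, so weak blocks receive large weights and are pruned from the active set.

```python
import numpy as np

def irmxne_reweight(X_blocks, eps=1e-12):
    """One reweighting step: the l0.5 quasinorm over blocks is majorized
    by a weighted l2,1 norm with weights derived from the previous
    estimate; near-zero blocks get very large weights."""
    return [1.0 / (2.0 * np.sqrt(np.linalg.norm(Xk, "fro")) + eps)
            for Xk in X_blocks]

blocks = [np.random.randn(3, 20), np.zeros((3, 20)) + 1e-8]
print(irmxne_reweight(blocks))   # the tiny block gets a huge weight
```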
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, a preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
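A simple stand-in for the l1 deconvolution model (Python): iterative soft-thresholding (ISTA) solving min 0.5||Hf − y||² + λ||f||₁. This is not the paper's PDIPM solver, only a compact illustration of the sparse model it solves:

```python
import numpy as np

def ista_deconvolution(H, y, lam, n_iter=200):
    """Sparse deconvolution by iterative soft-thresholding: a gradient
    step on the data term followed by the l1 proximal (shrinkage) step."""
    L = np.linalg.norm(H, 2) ** 2       # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = H.T @ (H @ f - y)
        z = f - g / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return f
```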
Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography
Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.
2012-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics was utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
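A one-dimensional sketch of the joint penalty (Python; the names and the anisotropic 1-D TV stencil are assumptions, and the surrogate-based minimization itself is omitted):

```python
import numpy as np

def l1_tv_objective(x, A, b, beta1, beta2):
    """Penalized least squares: the l1 term suppresses spurious background
    and enforces sparsity; the total variation term (1-D finite
    differences here) preserves local smoothness and piecewise constancy."""
    data_fit = 0.5 * np.sum((A @ x - b) ** 2)
    return (data_fit
            + beta1 * np.sum(np.abs(x))
            + beta2 * np.sum(np.abs(np.diff(x))))
```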
An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.
Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco
2017-04-01
In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
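A sketch of a standard concave surrogate for the zero-norm (Python; the exponential form follows Mangasarian-style approximations and α is an assumed smoothing parameter; the paper's exact approximation may differ):

```python
import numpy as np

def zero_norm_concave(w, alpha=5.0):
    """Smooth concave surrogate of ||w||_0: each term 1 - exp(-alpha*|w_i|)
    tends to the 0/1 indicator of a nonzero entry as alpha grows."""
    return np.sum(1.0 - np.exp(-alpha * np.abs(w)))

print(zero_norm_concave(np.array([0.0, 0.001, 2.0])))  # ~ number of nonzeros
```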
Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.
Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce
2018-06-15
A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods.
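A minimal sketch of the Tikhonov least-norm sweep behind an L-curve analysis (Python; corner detection is omitted): each λ yields a residual-norm/solution-norm pair, and the corner of the resulting curve selects the regularization weight.

```python
import numpy as np

def tikhonov_lcurve(A, b, lambdas):
    """Sweep regularization weights, recording residual and solution norms;
    the L-curve corner picks the least-norm trade-off."""
    curve = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        curve.append((np.linalg.norm(A @ x - b), np.linalg.norm(x), lam))
    return curve
```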
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated into the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L1 or L∞ norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, in which the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
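A small worked example of the linear-programming reformulation mentioned above (Python with SciPy; the polyhedral obstacle model and the helper name are assumptions): minimizing the L∞ distance from a point to a convex polyhedron becomes an LP in the point coordinates plus one slack variable.

```python
import numpy as np
from scipy.optimize import linprog

def linf_distance_to_polyhedron(p, A, b):
    """Minimum L-infinity distance from point p to {x : A x <= b},
    posed as a linear program in (x, t): minimize t s.t. |x - p| <= t."""
    n = len(p)
    c = np.r_[np.zeros(n), 1.0]                   # minimize t
    I = np.eye(n)
    A_ub = np.block([[I, -np.ones((n, 1))],       #  x - t <= p
                     [-I, -np.ones((n, 1))],      # -x - t <= -p
                     [A, np.zeros((A.shape[0], 1))]])
    b_ub = np.r_[p, -p, b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.fun

# distance from the origin to the half-space x1 >= 1 (answer: 1.0)
print(linf_distance_to_polyhedron(np.zeros(2),
                                  np.array([[-1.0, 0.0]]), np.array([-1.0])))
```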
NASA Astrophysics Data System (ADS)
Hasanov, Alemdar; Erdem, Arzu
2008-08-01
The inverse problem of determining the unknown coefficient of the non-linear differential equation of torsional creep is studied. The unknown coefficient g = g(ξ²) depends on the gradient ξ := |∇u| of the solution u(x), x ∈ Ω ⊂ R^n, of the direct problem. It is proved that this gradient is bounded in the C-norm. This permits one to choose a natural class of admissible coefficients for the considered inverse problem. The continuity in the norm of the Sobolev space H¹(Ω) of the solution u(x; g) of the direct problem with respect to the unknown coefficient g = g(ξ²) is obtained in the following sense: ‖u(x; g) − u(x; g_m)‖₁ → 0 when g_m(η) → g(η) pointwise as m → ∞. Based on these results, the existence of a quasi-solution of the inverse problem in the considered class of admissible coefficients is obtained. Numerical examples related to the determination of the unknown coefficient are presented.
Regularized magnetotelluric inversion based on a minimum support gradient stabilizing functional
NASA Astrophysics Data System (ADS)
Xiang, Yang; Yu, Peng; Zhang, Luolei; Feng, Shaokong; Utada, Hisashi
2017-11-01
Regularization is used to solve the ill-posed problem of magnetotelluric inversion, usually by adding a stabilizing functional to the objective functional so that a stable solution can be obtained. Among the possible stabilizing functionals, smoothing constraints are most commonly used, which produce spatially smooth inversion results. However, in some cases the focused imaging of a sharp electrical boundary is necessary. Although past works have proposed functionals that may be suitable for imaging a sharp boundary, such as the minimum support and minimum gradient support (MGS) functionals, they involve some difficulties and limitations in practice. In this paper, we propose a minimum support gradient (MSG) stabilizing functional as another possible choice of focusing stabilizer. In this approach, we calculate the gradient of the minimum support model stabilizing functional, which affects both the stability and the sharp boundary focus of the inversion. We then apply the discrete weighted matrix form of each stabilizing functional to build a unified form of the objective functional, allowing us to perform regularized inversion with a variety of stabilizing functionals in the same framework. By comparing one-dimensional and two-dimensional synthetic inversion results obtained using the MSG stabilizing functional with those obtained using other stabilizing functionals, we demonstrate that the MSG results are not only capable of clearly imaging a sharp geoelectrical interface but are also quite stable and robust. Overall good performance in terms of both data fitting and model recovery suggests that this stabilizing functional is effective and useful in practical applications.
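For orientation, discrete 1-D sketches of the two focusing stabilizers this work builds on (Python; β is the focusing parameter, and the paper's MSG variant differs in detail from both):

```python
import numpy as np

def minimum_support(m, m_ref, beta):
    """Discrete minimum-support stabilizer: smoothly counts the volume
    where the model departs from a reference model."""
    d = m - m_ref
    return np.sum(d**2 / (d**2 + beta**2))

def minimum_gradient_support(m, beta):
    """Minimum-gradient-support stabilizer: penalizes the support of the
    model gradient, favouring sharp, blocky interfaces (1-D stencil)."""
    g = np.diff(m)
    return np.sum(g**2 / (g**2 + beta**2))
```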
NASA Astrophysics Data System (ADS)
Li, Keqiang; Gao, Feng; Li, Shengbo Eben; Zheng, Yang; Gao, Hongbo
2017-12-01
This study presents a distributed H-infinity control method for uncertain platoons with dimensionally and structurally unknown interaction topologies, provided that the associated topological eigenvalues are bounded by a predesigned range. With an inverse model to compensate for nonlinear powertrain dynamics, vehicles in a platoon are modeled as third-order uncertain systems with bounded disturbances. On the basis of the eigenvalue decomposition of the topological matrices, we convert the platoon system into a norm-bounded uncertain part and a diagonally structured certain part by applying a linear transformation. We then use a common Lyapunov method to design a distributed H-infinity controller. Numerically, two linear matrix inequalities, corresponding to the minimum and maximum eigenvalues, must be solved. The resulting controller can tolerate interaction topologies with eigenvalues located in a certain range. The proposed method also ensures robust performance and disturbance attenuation for the closed-loop platoon system. Hardware-in-the-loop tests are performed to validate the effectiveness of our method.
Carroll, Suzanne J; Niyonsenga, Theo; Coffee, Neil T; Taylor, Anne W; Daniel, Mark
2018-05-18
Descriptive norms (what other people do) relate to individual-level dietary behaviour and health outcomes including overweight and obesity. Descriptive norms vary across residential areas, but the impact of spatial variation in norms on individual-level diet and health is poorly understood. This study assessed spatial associations between local descriptive norms for overweight/obesity and insufficient fruit intake (spatially specific local prevalence), and individual-level dietary intakes (fruit, vegetable and sugary drinks) and 10-year change in body mass index (BMI) and glycosylated haemoglobin (HbA1c). HbA1c and BMI were clinically measured three times over 10 years for a population-based adult cohort (n = 4056) in Adelaide, South Australia. Local descriptive norms for both overweight/obesity and insufficient fruit intake specific to each cohort participant were calculated as the prevalence of these factors, constructed from geocoded population surveillance data aggregated for 1600 m road-network buffers centred on cohort participants' residential addresses. Latent growth models estimated the effect of local descriptive norms on dietary behaviours and change in HbA1c and BMI, accounting for spatial clustering and covariates (individual-level age, sex, smoking status, employment and education, and area-level median household income). Local descriptive overweight/obesity norms were associated with individual-level fruit intake (inversely) and sugary drink consumption (positively), and with worsening HbA1c and BMI. Spatially specific local norms for insufficient fruit intake were associated with individual-level fruit intake (inversely) and sugary drink consumption (positively) and with worsening HbA1c but not change in BMI. Individual-level fruit and vegetable intakes were not associated with change in HbA1c or BMI. Sugary drink consumption was not associated with change in HbA1c but rather with increasing BMI. Adverse local descriptive norms for overweight/obesity and insufficient fruit intake are associated with unhealthful dietary intakes and worsening HbA1c and BMI. As such, spatial variation in lifestyle-related norms is an important consideration in the design of population health interventions. Adverse local norms influence health behaviours and outcomes and stand to inhibit the effectiveness of traditional intervention efforts not spatially tailored to local population characteristics. Spatially targeted social de-normalisation strategies for regions with high levels of unhealthful norms may hold promise in concert with individual, environmental and policy intervention approaches.
Opposite effects on facial morphology due to gene dosage sensitivity.
Hammond, Peter; McKee, Shane; Suttie, Michael; Allanson, Judith; Cobben, Jan-Maarten; Maas, Saskia M; Quarrell, Oliver; Smith, Ann C M; Lewis, Suzanne; Tassabehji, May; Sisodiya, Sanjay; Mattina, Teresa; Hennekam, Raoul
2014-09-01
Sequencing technology is increasingly demonstrating the impact of genomic copy number variation (CNV) on phenotypes. Opposing variation in growth, head size, cognition and behaviour is known to result from deletions and reciprocal duplications of some genomic regions. We propose normative inversion of face shape, opposing difference from a matched norm, as a basis for investigating the effects of gene dosage on craniofacial development. We use dense surface modelling techniques to match any face (or part of a face) to a facial norm of unaffected individuals of matched age, sex and ethnicity and then we reverse the individual's face shape differences from the matched norm to produce the normative inversion. We demonstrate for five genomic regions, 4p16.3, 7q11.23, 11p15, 16p13.3 and 17p11.2, that such inversion for individuals with a duplication or (epi)-mutation produces facial forms remarkably similar to those associated with a deletion or opposite (epi-)mutation of the same region, and vice versa. The ability to visualise and quantify face shape effects of gene dosage is of major benefit for determining whether a CNV is the cause of the phenotype of an individual and for predicting reciprocal consequences. It enables face shape to be used as a relatively simple and inexpensive functional analysis of the gene(s) involved.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
Bibliography
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
The woodcock reading mastery test: impact of normative changes.
Pae, Hye Kyeong; Wise, Justin C; Cirino, Paul T; Sevcik, Rose A; Lovett, Maureen W; Wolf, Maryanne; Morris, Robin D
2005-09-01
This study examined the magnitude of differences in standard scores, convergent validity, and concurrent validity when an individual's performance was gauged using the revised and the normative update (Woodcock, 1998) editions of the Woodcock Reading Mastery Test in which the actual test items remained identical but norms have been updated. From three metropolitan areas, 899 first to third grade students referred by their teachers for a reading intervention program participated. Results showed the inverse Flynn effect, indicating systematic inflation averaging 5 to 9 standard score points, regardless of gender, IQ, city site, or ethnicity, when calculated using the updated norms. Inflation was greater at lower raw score levels. Implications for using the updated norms for identifying children with reading disabilities and changing norms during an ongoing study are discussed.
Path planning for robotic truss assembly
NASA Technical Reports Server (NTRS)
Sanderson, Arthur C.
1993-01-01
A new Potential Fields approach to the robotic path planning problem is proposed and implemented. Our approach, which is based on one originally proposed by Munger, computes an incremental joint vector based upon attraction to a goal and repulsion from obstacles. By repetitively adding and computing these 'steps', it is hoped (but not guaranteed) that the robot will reach its goal. An attractive force exerted by the goal is found by solving for the minimum norm solution to the linear Jacobian equation. A repulsive force between obstacles and the robot's links is used to avoid collisions; its magnitude is inversely proportional to the distance. Together, these forces make the goal the global minimum potential point, but local minima can stop the robot from ever reaching that point. Our approach improves on the basic potential field paradigm developed by Munger by using an active, adaptive field, which we call a 'flexible' potential field. Active fields are stronger when objects move towards one another and weaker when they move apart. An adaptive field's strength is individually tailored to be just strong enough to avoid any collision. In addition to the local planner, a global planning algorithm helps the planner avoid local field minima by providing subgoals. These subgoals are based on the obstacles that caused the local planner to fail. A best-first search algorithm (A*) is used for the graph search.
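A minimal sketch of one incremental step (Python; the gain, the names, and the repulsion model are illustrative assumptions): the attraction term is the minimum-norm solution of the linear Jacobian equation via the pseudoinverse, and each repulsion term grows inversely with its obstacle distance.

```python
import numpy as np

def potential_field_step(J, x_err, repulsions, gain=0.5):
    """One incremental joint update: minimum-norm attraction toward the
    goal via the Jacobian pseudoinverse, plus joint-space repulsion terms
    whose magnitude is inversely proportional to obstacle distance."""
    dq_attract = np.linalg.pinv(J) @ x_err      # least-norm solution of J dq = x_err
    dq_repulse = sum(r / max(d, 1e-6) for r, d in repulsions)
    return gain * (dq_attract + dq_repulse)

J = np.array([[1.0, 0.0, 0.5],                  # 2-D task, 3 joints (toy)
              [0.0, 1.0, 0.5]])
dq = potential_field_step(J, np.array([0.1, -0.05]),
                          [(np.array([0.0, 0.0, 1.0]), 0.2)])
print(dq)
```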
Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie
2013-05-01
Fluorescence molecular tomography (FMT) is an important optical imaging technique. The major challenge for FMT reconstruction methods is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. A comparative study between reconstruction algorithms based on the l1-norm and the l2-norm for two imaging models of FMT is presented. The first imaging model, adopted by most researchers, is one where the fluorescent target is of small size, mimicking small tissue with fluorescent substance, as in the early detection of a tumor. The second model is the reconstruction of the distribution of fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
NASA Astrophysics Data System (ADS)
Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei
2017-02-01
The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a model-norm stabilizer with a depth weighting function to produce smooth models; the depth weighting function counteracts the skin effect of the gravity potential field. As the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the inverse problem is strongly under-determined. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) iterative conjugate gradient (CG) method is shown to be an appropriate algorithm for solving this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin, off the shore of Louisiana. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
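A compact sketch of SSOR-preconditioned CG (Python with SciPy; ω = 1.5 and the dense-matrix implementation are illustrative choices, and the paper's implementation details may differ). Applied to the Tikhonov normal equations, A would be GᵀG plus the weighted model-norm term:

```python
import numpy as np
from scipy.linalg import solve_triangular

def ssor_pcg(A, b, omega=1.5, tol=1e-8, max_iter=500):
    """CG on a symmetric positive definite A, preconditioned by SSOR:
    M = (omega/(2-omega)) * W D^{-1} W^T with W = D/omega + L."""
    D = np.diag(A)
    W = np.diag(D) / omega + np.tril(A, -1)
    d = D * (2.0 - omega) / omega

    def apply_Minv(r):
        y = solve_triangular(W, r, lower=True)
        return solve_triangular(W.T, d * y, lower=False)

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            break
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

G = np.random.randn(30, 60)              # toy under-determined kernel
A = G.T @ G + 0.1 * np.eye(60)           # Tikhonov normal equations
m = ssor_pcg(A, G.T @ np.random.randn(30))
```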
Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane
2016-01-01
Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.
Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric
2014-01-01
Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. PMID:25426044
Joint Inversion of 3d Mt/gravity/magnetic at Pisagua Fault.
NASA Astrophysics Data System (ADS)
Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.
2017-12-01
This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey has poor coverage of the study area, with only 21 stations; however, it allows detection of a low resistivity zone aligned with the Pisagua Fault trace, which is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and better coverage, adds detail and resolution to the detected low resistivity structure and helps improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion process minimizes a multiple-target function that includes the data misfit, model roughness and coupling norms (cross-gradient and direct relations) for all geophysical methods considered (MT, gravity and magnetic). This problem is solved iteratively using the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit, model roughness and coupling with the other geophysical models. Dedicated 3D inversion software codes, which include the coupling norms with additional geophysical parameters, were developed to solve the model updates for the magnetic and gravity methods. The model update for 3D MT is calculated using an iterative method that sequentially filters the prior model and the output model of a single 3D MT inversion process to obtain a resistivity model coupled with the gravity and magnetic methods.
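A minimal sketch of the cross-gradient coupling norm on a shared 2-D grid (Python; the function name is an assumption): the cross product of the two model gradients vanishes wherever the structures are geometrically aligned, so penalizing its square couples the models without forcing a fixed petrophysical relation.

```python
import numpy as np

def cross_gradient_norm(m1, m2):
    """Cross-gradient coupling for two 2-D models on the same grid:
    t = grad(m1) x grad(m2) vanishes where the structures align."""
    g1y, g1x = np.gradient(m1)
    g2y, g2x = np.gradient(m2)
    t = g1x * g2y - g1y * g2x          # z-component of the cross product
    return np.sum(t**2)
```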
Data inversion immune to cycle-skipping using AWI
NASA Astrophysics Data System (ADS)
Guasch, L.; Warner, M.; Umpleby, A.; Yao, G.; Morgan, J. V.
2014-12-01
Over the last decade, 3D Full Waveform Inversion (FWI) has become a standard model-building tool in exploration seismology, especially in oil and gas applications, thanks to the high-quality data sets (with dense spatial sampling of sources and receivers) acquired by the industry. FWI provides superior quantitative images compared with its travel-time counterparts because it aims to match all the information in the observations instead of a severely restricted subset, namely picked arrivals. The downside is that the solution space explored by FWI has a large number of local minima, and since the solution is restricted to local optimization methods (due to the cost of evaluating the objective function), the success of the inversion requires starting within the basin of attraction of the global minimum. Local minima can exist for a wide variety of reasons, and it seems unlikely that a formulation of the problem exists that can eliminate all of them by making the objective function monotonic. However, a significant number of local minima are created by the definition of the data misfit. In its standard formulation, FWI compares observed data (field data) with predicted data (generated with a synthetic model) by subtracting one from the other, and the objective function is defined as some norm of this difference. The combination of this criterion with the oscillatory nature of seismic data produces the well-known phenomenon of cycle-skipping, where model updates try to match the nearest cycles of one data set to the other. In order to avoid cycle-skipping, we propose a different comparison between observed and predicted data, based on Wiener filters, which exploits the fact that the "identity" Wiener filter is a spike at zero lag. This gives rise to a new objective function without cycle-skip-related local minima, and therefore removes the need for accurate starting models or low frequencies in the data. This new technique, called Adaptive Waveform Inversion (AWI), appears consistently superior to conventional FWI.
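A frequency-domain sketch of the Wiener-filter misfit idea (Python; the regularization eps, the padding, and the |lag| penalty are illustrative choices, not the published AWI implementation): a perfect prediction gives a filter that is a spike at zero lag, and filter energy at non-zero lags is penalized.

```python
import numpy as np

def awi_misfit(predicted, observed, eps=1e-3):
    """Estimate the Wiener filter that maps predicted onto observed data,
    then penalize filter energy away from zero lag, normalized by the
    total filter energy."""
    n = len(predicted)
    P = np.fft.rfft(predicted, 2 * n)
    D = np.fft.rfft(observed, 2 * n)
    W = (np.conj(P) * D) / (np.abs(P) ** 2 + eps)   # regularized deconvolution
    w = np.fft.irfft(W, 2 * n)
    lags = np.minimum(np.arange(2 * n), 2 * n - np.arange(2 * n))  # circular |lag|
    return np.sum((lags * w) ** 2) / np.sum(w ** 2)

t = np.linspace(0, 1, 256)
d_obs = np.sin(2 * np.pi * 5 * t)
d_pred = np.sin(2 * np.pi * 5 * (t - 0.1))   # shifted by half a cycle
print(awi_misfit(d_pred, d_obs))
```

Unlike a subtraction-based residual, this measure keeps changing smoothly as the time shift between the traces grows beyond half a cycle, which is the cycle-skipping immunity the abstract describes.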
Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.
2007-01-01
The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.
Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)
2001-01-01
A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as the natural frequencies of modes, is used to update the analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the ∞-norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds to evaluate which norm is least conservative.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
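A minimal usage sketch with the `cma` Python package (assumed installed; the toy misfit and the 0.5 threshold for archiving "equivalent" models are illustrative): candidate models evaluated during the run are archived whenever their misfit is low, giving a crude ensemble of the kind described above.

```python
import numpy as np
import cma  # the "cma" package implements CMA-ES

def misfit(m):
    """Toy nonlinear misfit standing in for a PDE-constrained forward model."""
    return float(np.sum((m - 1.0) ** 2) + 0.3 * np.sum(np.sin(3.0 * m) ** 2))

es = cma.CMAEvolutionStrategy(4 * [0.0], 0.5)   # start point, initial sigma
low_misfit = []                                 # archive of equivalent models
while not es.stop():
    solutions = es.ask()
    fitnesses = [misfit(s) for s in solutions]
    es.tell(solutions, fitnesses)
    low_misfit += [s for s, f in zip(solutions, fitnesses) if f < 0.5]
print(es.result.xbest, len(low_misfit))
```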
Perceptual dehumanization of faces is activated by norm violations and facilitates norm enforcement.
Fincher, Katrina M; Tetlock, Philip E
2016-02-01
This article uses methods drawn from perceptual psychology to answer a basic social psychological question: Do people process the faces of norm violators differently from those of others, and, if so, what is the functional significance? Seven studies suggest that people process these faces differently and that the differential processing makes it easier to punish norm violators. Studies 1 and 2 use a recognition-recall paradigm that manipulated facial inversion and spatial frequency to show that people rely upon face-typical processing less when they perceive norm violators' faces. Study 3 uses a facial composite task to demonstrate that the effect is actor dependent, not action dependent, and to suggest that configural processing is the mechanism of perceptual change. Studies 4 and 5 use offset faces to show that configural processing is attenuated only when the faces belong to perpetrators who are culpable. Studies 6 and 7 show that people find it easier to punish inverted faces and harder to punish faces displayed in low spatial frequency. Taken together, these data suggest a bidirectional flow of causality between lower-order perceptual and higher-order cognitive processes in norm enforcement.
Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning
NASA Astrophysics Data System (ADS)
Zuberi, M. AH; Pratt, R. G.
2018-04-01
The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
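A 1-D sketch of Sobolev-style gradient smoothing (Python; this shows the generic idea of measuring the gradient in a derivative-penalizing inner product, not the paper's exact scaled-Sobolev operator): solving (I − αΔ)g_s = g damps high model wavenumbers in the update while leaving the objective function itself unchanged.

```python
import numpy as np

def sobolev_precondition(grad, alpha=10.0):
    """Smooth a 1-D gradient by solving (I - alpha * Laplacian) g_s = g;
    larger alpha pushes the update toward longer wavelengths."""
    n = len(grad)
    A = np.eye(n)
    i = np.arange(1, n - 1)            # interior points; boundaries kept as identity
    A[i, i] += 2.0 * alpha
    A[i, i - 1] -= alpha
    A[i, i + 1] -= alpha
    return np.linalg.solve(A, grad)

g = np.random.randn(200)               # noisy raw gradient
print(sobolev_precondition(g)[:5])     # smoothed, background-scale update
```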
Gerdes, Zachary T; Levant, Ronald F
2018-03-01
The Conformity to Masculine Norms Inventory (CMNI) is a widely used multidimensional scale. Studies using the CMNI most often report only total scale scores, which are predominantly associated with negative outcomes. Various studies since the CMNI's inception in 2003 that used subscales have reported both positive and negative outcomes. The current content analysis examined studies (N = 17) correlating the 11 subscales with 63 criterion variables across 7 categories. Most findings were consistent with past research using total scale scores that reported negative outcomes. For example, conformity to masculine norms has been inversely related to help-seeking and positively correlated with concerning health variables, such as substance use. Nonetheless, past reliance on total scores has obscured the complexity of associations with the CMNI, in that 30% of the findings in the present study reflected positive outcomes, particularly for health promotion. Subscales differed in their relationships with various outcomes: for one subscale the relationships were predominantly positive, but for six others they were mostly negative. The situational and contextual implications of conformity to masculine norms and their relationships to positive and negative outcomes are discussed.
Men’s Perspectives on Women’s Empowerment and Intimate Partner Violence in Rural Bangladesh
Schuler, Sidney Ruth; Lenzi, Rachel; Badal, Shamsul Huda; Nazneen, Sohela
2017-01-01
Intimate partner violence (IPV) may increase as women in patriarchal societies become empowered, implicitly or explicitly challenging prevailing gender norms. Prior evidence suggests an inverse U-shaped relationship between women’s empowerment and IPV, in which violence against women first increases and then decreases as more egalitarian gender norms gradually gain acceptance. By means of focus group discussions and in-depth interviews with men in 10 Bangladeshi villages, this study explores men’s evolving views of women, gender norms and the legitimacy of men’s perpetration of IPV in the context of a gender transition. It examines men’s often-contradictory narratives about women’s empowerment and concomitant changes in norms of masculinity, and identifies aspects of women’s empowerment that are most likely to provoke a male backlash. The findings suggest that men’s growing acceptance of egalitarian gender norms and their self-reported decreased engagement in IPV are driven largely by pragmatic self-interest: their desire to improve their economic status and fear of negative consequences of IPV. PMID:28594292
1984-04-01
(5.15) where a is a positive constant and ‖·‖_H is the Hilbert space norm associated with the chosen covariance function K; the constant a is arbitrary. [Table-of-contents fragment recovered from the scan: 4. Density Anomalies; 5. Unknown Densities - Geophysical Inversion; 6. Density Modelling Using Rectangular Prisms: 6.1 Space Domain, 6.2 Frequency Domain.] ...theory: to calculate the gravity potential and its derivatives in space due to given density distributions. When the prime interest is in "external ...
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
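The data-space economy mentioned in the abstract can be illustrated with a generic Gauss-Newton step; the matrix names, the identity data covariance (via lam) and the diagonal stand-in model covariance below are assumptions for illustration, not the HexMT implementation.

```python
import numpy as np

def gauss_newton_dataspace(J, r, cm_diag, lam=1.0):
    """Data-space Gauss-Newton step for a diagonal model covariance Cm:

        dm = Cm J^T (J Cm J^T + lam*I)^(-1) r

    The linear solve is only n_data x n_data, which is the saving the
    abstract refers to when n_data << n_model.
    """
    JCm = J * cm_diag[None, :]                 # J Cm for diagonal Cm
    A = JCm @ J.T + lam * np.eye(J.shape[0])   # small data-space matrix
    return cm_diag * (J.T @ np.linalg.solve(A, r))

rng = np.random.default_rng(1)
J = rng.standard_normal((50, 5000))            # 50 data, 5000 model cells
r = rng.standard_normal(50)                    # data residual
dm = gauss_newton_dataspace(J, r, cm_diag=np.full(5000, 0.1))
```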
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined by cost values previously calculated by a simplified MUSIC scan, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
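A minimal sketch of the weighted minimum-norm estimator around which the method is built; the random lead field, the regularization weight lam and the uniform placeholder weights (where the paper would insert depth normalization times MUSIC-derived costs) are illustrative assumptions.

```python
import numpy as np

def weighted_minimum_norm(L, y, w, lam):
    """Weighted MNE: x = W^(-1) L^T (L W^(-1) L^T + lam*I)^(-1) y.

    L : (n_sensors, n_sources) lead-field matrix
    w : (n_sources,) diagonal weights, e.g. depth normalization
        combined with a MUSIC-derived cost per source location
    """
    winv = 1.0 / w                                   # diagonal of W^(-1)
    G = (L * winv[None, :]) @ L.T + lam * np.eye(L.shape[0])
    return winv * (L.T @ np.linalg.solve(G, y))

rng = np.random.default_rng(2)
L = rng.standard_normal((64, 500))    # 64 sensors, 500 candidate sources
y = rng.standard_normal(64)           # one time sample of measurements
x = weighted_minimum_norm(L, y, w=np.ones(500), lam=1e-2)
```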
Spectral filtering of gradient for l2-norm frequency-domain elastic waveform inversion
NASA Astrophysics Data System (ADS)
Oh, Ju-Won; Min, Dong-Joo
2013-05-01
To enhance the robustness of the l2-norm elastic full-waveform inversion (FWI), we propose a denoise function that is incorporated into single-frequency gradients. Because field data are noisy and modelled data are noise-free, the denoise function is designed based on the ratio of modelled data to field data summed over shots and receivers. We first take the sums of the modelled data and field data over shots, then take the sums of the absolute values of the resultant modelled data and field data over the receivers. Due to the monochromatic property of wavefields at each frequency, signals in both modelled and field data tend to be cancelled out or maintained, whereas certain types of noise, particularly random noise, can be amplified in field data. As a result, the spectral distribution of the denoise function is inversely proportional to the ratio of noise to signal at each frequency, which helps prevent the noise-dominant gradients from contributing to model parameter updates. Numerical examples show that the spectral distribution of the denoise function resembles a frequency filter that is determined by the spectrum of the signal-to-noise (S/N) ratio during the inversion process, with little human intervention. The denoise function is applied to the elastic FWI of synthetic data generated from a modified version of the Marmousi-2 model, with three types of added random noise: white, low-frequency and high-frequency. Based on the spectrum of S/N ratios at each frequency, the denoise function mainly suppresses noise-dominant single-frequency gradients, which improves the inversion results at the cost of spatial resolution.
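A sketch of the denoise-weight recipe as the abstract describes it (sum over shots, then sum of absolute values over receivers, then the modelled-to-field ratio per frequency); the array layout and any final normalization are assumptions.

```python
import numpy as np

def denoise_weights(d_mod, d_obs):
    """Per-frequency denoise weights following the abstract's recipe.

    d_mod, d_obs : complex arrays, shape (n_freq, n_shots, n_receivers).
    Coherent signal survives the shot sum in both datasets, while random
    noise in the field data does not cancel, so the modelled/field ratio
    drops at noise-dominated frequencies.
    """
    num = np.abs(d_mod.sum(axis=1)).sum(axis=1)   # sum shots, |.|, sum receivers
    den = np.abs(d_obs.sum(axis=1)).sum(axis=1)
    return num / den                              # one weight per frequency

# Toy check: add strong white noise at one frequency of the "field" data.
rng = np.random.default_rng(3)
d = rng.standard_normal((5, 20, 30)) + 1j * rng.standard_normal((5, 20, 30))
d_noisy = d.copy()
d_noisy[2] += 5.0 * (rng.standard_normal((20, 30)) + 1j * rng.standard_normal((20, 30)))
w = denoise_weights(d, d_noisy)   # the weight at frequency index 2 is reduced
```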
Lac, Andrew; Alvaro, Eusebio M; Crano, William D; Siegel, Jason T
2009-03-01
Despite research indicating that effective parenting plays an important protective role in adolescent risk behaviors, few studies have applied theory to examine this link with marijuana use, especially with national data. In the current study (N = 2,141), we hypothesized that parental knowledge (of adolescent activities and whereabouts) and parental warmth are antecedents of adolescents' marijuana beliefs (attitudes, subjective norms, and perceived behavioral control), as posited by the Theory of Planned Behavior (TPB; Ajzen 1991). These three types of beliefs were hypothesized to predict marijuana intention, which in turn was hypothesized to predict marijuana consumption. Results of confirmatory factor analyses corroborated the psychometric properties of the two-factor parenting structure as well as the five-factor structure of the TPB. Further, the proposed integrative predictive framework, estimated with a latent structural equation model, was largely supported. Parental knowledge inversely predicted pro-marijuana attitudes, subjective norms, and perceived behavioral control; parental warmth inversely predicted pro-marijuana attitudes and subjective norms, ps < .001. Marijuana intention (p < .001), but not perceived behavioral control, predicted marijuana use 1 year later. In households with high parental knowledge, parental warmth also was perceived to be high (r = .54, p < .001). Owing to the analysis of nationally representative data, results are generalizable to the United States population of adolescents 12-18 years of age.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
Special Relativity Derived from Spacetime Magma
Greensite, Fred
2014-01-01
We present a derivation of relativistic spacetime largely untethered from specific physical considerations, in contrast to the many physically-based derivations that have appeared in the last few decades. The argument proceeds from the inherent magma (groupoid) existing on the union of spacetime frame components and Euclidean space, which is consistent with an “inversion symmetry” constraint from which the Minkowski norm results. In this context, the latter is also characterized as one member of a class of “inverse norms” which play major roles with respect to various unital *-algebras more generally. PMID:24959889
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of the ℓ1-norm constrained optimization problem and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
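A minimal linearized Bregman sketch for an ℓ1-norm sparsity-promoting problem, in the generic compressive-sensing setting rather than the FWI model-update setting; the step size delta, threshold mu and random test matrix are illustrative assumptions.

```python
import numpy as np

def linearized_bregman(A, b, mu, delta, n_iter=3000):
    """Linearized Bregman iteration (sketch) for sparse recovery from Au = b.

    v accumulates back-projected residuals; u is a scaled soft
    thresholding (shrinkage) of v, which is what promotes sparsity.
    """
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ u)                              # residual back-projection
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrinkage
    return u

# Approximately recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
u_true = np.zeros(200)
u_true[[5, 50, 150]] = [1.0, -2.0, 1.5]
u_rec = linearized_bregman(A, A @ u_true, mu=2.0, delta=0.2)
```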
Psychometric Properties of the Positive Automatic Thoughts Questionnaire.
ERIC Educational Resources Information Center
Ingram, Rick E.; And Others
1995-01-01
Original data and other studies using the Positive Automatic Thoughts Questionnaire (ATP-Q) show that the reliability and norms of the instrument appear stable and that the ATP-Q is inversely associated with negative affective states but unrelated to conditions, such as medical conditions, that are not accompanied by psychological distress. (SLD)
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
Lidar measurements of mesospheric temperature inversion at a low latitude
NASA Astrophysics Data System (ADS)
Siva Kumar, V.; Bhavani Kumar, Y.; Raghunath, K.; Rao, P. B.; Krishnaiah, M.; Mizutani, K.; Aoki, T.; Yasui, M.; Itabe, T.
2001-08-01
The Rayleigh lidar data collected on 119 nights from March 1998 to February 2000 were used to study the statistical characteristics of the low latitude mesospheric temperature inversion observed over Gadanki (13.5° N, 79.2° E), India. The occurrence frequency of the inversion showed semiannual variation with maxima in the equinoxes and minima in the summer and winter, which was quite different from that reported for the mid-latitudes. The peak of the inversion layer was found to be confined to the height range of 73 to 79 km with the maximum occurrence centered around 76 km, with a weak seasonal dependence that fits well to an annual cycle with a maximum in June and a minimum in December. The magnitude of the temperature deviation associated with the inversion was found to be as high as 32 K, with the most probable value occurring at about 20 K. Its seasonal dependence seems to follow an annual cycle with a maximum in April and a minimum in October. The observed characteristics of the inversion layer are compared with that of the mid-latitudes and discussed in light of the current understanding of the source mechanisms.
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.
Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S
2014-04-01
Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can exhibit spurious variations in the observations caused by several factors. This study investigates the impact of the geomembrane on ERT measurements. Indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically assume infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane to satisfy the conditions of validity of the classical inversion tools. This distance is a function of the electrode line length (i.e. of the unit electrode spacing) used, the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, it is possible to significantly improve the inversion process by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied. Copyright © 2014 Elsevier Ltd. All rights reserved.
Universal inverse power-law distribution for temperature and rainfall in the UK region
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-06-01
Meteorological parameters, such as temperature, rainfall, pressure, etc., exhibit self-similar space-time fractal fluctuations generic to dynamical systems in nature such as fluid flows, spread of forest fires, earthquakes, etc. The power spectra of fractal fluctuations display an inverse power-law form signifying long-range correlations. A general systems theory model predicts a universal inverse power-law form incorporating the golden mean for the fractal fluctuations. The model-predicted distribution was compared with the observed distribution of fractal fluctuations of all size scales (small, large and extreme values) in the historic month-wise temperature (maximum and minimum) and total rainfall for the four stations Oxford, Armagh, Durham and Stornoway in the UK region, for data periods ranging from 92 years to 160 years. For each parameter, the two cumulative probability distributions, namely cmax and cmin, starting from the maximum and minimum data values respectively, were used. The results of the study show that (i) temperature distributions (maximum and minimum) follow the model-predicted distribution, except for the Stornoway minimum temperature cmin; (ii) the rainfall distribution for cmin follows the model-predicted distribution for all four stations; (iii) the rainfall distribution for cmax follows the model-predicted distribution for the two stations Armagh and Stornoway. The present study suggests that fractal fluctuations result from the superimposition of eddy continuum fluctuations.
On the structure of critical energy levels for the cubic focusing NLS on star graphs
NASA Astrophysics Data System (ADS)
Adami, Riccardo; Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
2012-05-01
We provide information on a non-trivial structure of phase space of the cubic nonlinear Schrödinger (NLS) on a three-edge star graph. We prove that, in contrast to the case of the standard NLS on the line, the energy associated with the cubic focusing Schrödinger equation on the three-edge star graph with a free (Kirchhoff) vertex does not attain a minimum value on any sphere of constant L2-norm. We moreover show that the only stationary state with prescribed L2-norm is indeed a saddle point.
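For reference, the constrained variational problem behind this result can be written in the standard form for the cubic focusing NLS on a metric graph (the normalization constants here are conventional assumptions, not taken from the paper):

```latex
E(\Psi) \;=\; \frac{1}{2}\sum_{e=1}^{3}\int_{I_e}\bigl|\psi_e'(x)\bigr|^{2}\,dx
\;-\; \frac{1}{4}\sum_{e=1}^{3}\int_{I_e}\bigl|\psi_e(x)\bigr|^{4}\,dx,
\qquad
\|\Psi\|_{L^2(\mathcal{G})}^{2} \;=\; \sum_{e=1}^{3}\int_{I_e}\bigl|\psi_e(x)\bigr|^{2}\,dx \;=\; \mu .
```

Here the components psi_e live on the three half-line edges I_e and satisfy the free (Kirchhoff) condition at the vertex; the abstract's result is that the infimum of E over this constant-mass sphere is not attained, and the unique stationary state at prescribed mass is a saddle point.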
Paschall, Mallie J; Ringwalt, Chris; Wyatt, Todd; Dejong, William
2014-04-01
The authors investigated possible mediating effects of psychosocial variables (perceived drinking norms, positive and negative alcohol expectancies, personal approval of alcohol use, protective behavioral strategies) targeted by an online alcohol education course (AlcoholEdu for College) as part of a 30-campus randomized trial with 2,400 first-year students. Previous multilevel analyses have found significant effects of the AlcoholEdu course on the frequency of past-30-day alcohol use and binge drinking during the fall semester, and the most common types of alcohol-related problems. Exposure to the online AlcoholEdu course was inversely related to perceived drinking norms but was not related to any of the other psychosocial variables. Multilevel analyses indicated at least partial mediating effects of perceived drinking norms on behavioral outcomes. Findings of this study suggest that AlcoholEdu for College affects alcohol use and related consequences indirectly through its effect on student perceptions of drinking norms. Further research is needed to better understand why this online course did not appear to affect other targeted psychosocial variables.
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
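A generic sketch of the L1-regularized least-squares imaging step via ISTA; the matrix Lop stands in for the linearized (Born/Gaussian-beam) modeling operator, and the paper's diagonal-Hessian preconditioner is omitted. The step size, lam and the sparse toy reflectivity are illustrative assumptions.

```python
import numpy as np

def ista_migration(L, d, lam, step, n_iter=1000):
    """Least-squares migration sketch:
    minimize 0.5*||L m - d||_2^2 + lam*||m||_1 via ISTA.

    Each iteration applies the adjoint (migration) operator to the
    data residual, takes a gradient step, and soft-thresholds the image.
    """
    m = np.zeros(L.shape[1])
    for _ in range(n_iter):
        m -= step * (L.T @ (L @ m - d))                          # gradient step
        m = np.sign(m) * np.maximum(np.abs(m) - step * lam, 0.0)  # soft threshold
    return m

rng = np.random.default_rng(5)
Lop = rng.standard_normal((300, 400)) / np.sqrt(300)
m_true = np.zeros(400)
m_true[::50] = 1.0                     # sparse toy reflectivity spikes
m_hat = ista_migration(Lop, Lop @ m_true, lam=0.01, step=0.1)
```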
NASA Astrophysics Data System (ADS)
Hren, Rok
1998-06-01
Using computer simulations, we systematically investigated the limitations of an inverse solution that employs the potential distribution on the epicardial surface as an equivalent source model in localizing pre-excitation sites in Wolff-Parkinson-White syndrome. A model of the human ventricular myocardium that features an anatomically accurate geometry, an intramural rotating anisotropy and a computational implementation of the excitation process based on electrotonic interactions among cells, was used to simulate body surface potential maps (BSPMs) for 35 pre-excitation sites positioned along the atrioventricular ring. Two individualized torso models were used to account for variations in torso boundaries. Epicardial potential maps (EPMs) were computed using the L-curve inverse solution. The measure for accuracy of the localization was the distance between a position of the minimum in the inverse EPMs and the actual site of pre-excitation in the ventricular model. When the volume conductor properties and lead positions of the torso were precisely known and measurement noise was added to the simulated BSPMs, the minimum in the inverse EPMs at 12 ms after the onset was on average within … cm of the pre-excitation site. When the standard torso model was used to localize the sites of onset of the pre-excitation sequence initiated in individualized male and female torso models, the mean distance between the minimum and the pre-excitation site was … cm for the male torso and … cm for the female torso. The findings of our study indicate that the location of the minimum in EPMs computed using the inverse solution can offer a non-invasive means for pre-interventional planning of the ablative treatment.
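The L-curve strategy used for the inverse EPMs can be sketched generically: sweep the Tikhonov parameter, record residual and solution norms, and pick the corner of the log-log curve. The stand-in transfer matrix and parameter grid below are assumptions; corner detection (e.g. by maximum curvature) is left out.

```python
import numpy as np

def tikhonov_lcurve(A, b, lambdas):
    """Tikhonov solutions over a sweep of regularization parameters,
    returning residual and solution norms for an L-curve plot.
    The 'corner' of the log-log curve is the usual parameter choice."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res_norms, sol_norms, sols = [], [], []
    for lam in lambdas:
        f = s / (s**2 + lam**2)          # filter factors s^2/(s^2+lam^2), applied as f*beta
        x = Vt.T @ (f * beta)
        sols.append(x)
        res_norms.append(np.linalg.norm(A @ x - b))
        sol_norms.append(np.linalg.norm(x))
    return np.array(res_norms), np.array(sol_norms), sols

rng = np.random.default_rng(6)
A = rng.standard_normal((120, 300))      # stand-in body-surface-to-epicardium transfer matrix
b = A @ rng.standard_normal(300) + 0.05 * rng.standard_normal(120)
rho, eta, xs = tikhonov_lcurve(A, b, np.logspace(-4, 1, 30))
```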
Iwamoto, Derek Kenji; Corbin, William; Lejuez, Carl; MacPherson, Laura
2015-01-01
College men are more likely to engage in health-compromising behaviors including risky drinking behavior, and experience more alcohol-related problems, including violence and arrest, as compared to women. The study of masculine norms or societal expectations, defined as beliefs and values about what it means to be a man, is one promising area of investigation that may help explain within-group differences and differential rates of alcohol use among men. Using the gender social learning model, we investigated the role of positive alcohol expectancies as an underlying mediator between masculine norms and alcohol use among college men. Data from 804 college adult men (Mean age = 20.43) were collected through a web-based assessment. Participants completed a self-report measure of binge drinking, frequency of drinking, quantity of drinks, conformity to masculine norms, and positive alcohol expectancies measures. Structural equation modeling was used to examine relations between masculine norms, alcohol expectancies and alcohol use. The masculine norms of “Playboy” and Risk-Taking were positively related to heavy alcohol use, while Emotional Control and Heterosexual Presentation were both negatively associated with alcohol use, after controlling for fraternity Greek status and positive expectancies. Playboy and Winning norms were positively associated with positive expectancies while Power Over Women was inversely related to positive expectancies which, in turn, were associated with heavier alcohol use. This study was a novel exploration into the multiple pathways and mediators through which positive alcohol expectancies may help explain and provide specificity to the masculinity and alcohol use relationship among college men. PMID:25705133
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion model result can often be considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion satisfies the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, thus the minimum model has all layer parameters distributed about their mean parameter values with well-defined variance. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model, and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" We can calculate this through use of the erf or erfc functions. The covariance values of the inversion become marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 <= x, given that layer 1 <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers in the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation, which overlies the Uley and Wanilla Formations containing Tertiary clays and Tertiary sandstone. These Formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface Formation types, we pose questions such as: "What is the probability-depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
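A sketch of the simplest marginal computation described above, using the Gaussian (erf-based) CDF per layer with only the diagonal of the posterior covariance; multiplying the per-layer probabilities treats the layers as independent, which is the marginalization the abstract describes, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def prob_all_below(mean, cov, cutoff):
    """P(all layer parameters <= cutoff), using only the main diagonal
    of the posterior covariance (marginalized per-layer Gaussians).
    The conditional questions in the abstract would need the full
    covariance and a multidimensional integration instead."""
    sigma = np.sqrt(np.diag(cov))
    p_each = norm.cdf(cutoff, loc=mean, scale=sigma)  # erf-based CDF per layer
    return p_each.prod()

mean = np.array([1.2, 1.8, 2.5])     # posterior log10-resistivity per layer (illustrative)
cov = np.diag([0.04, 0.09, 0.25])    # illustrative posterior covariance
p = prob_all_below(mean, cov, cutoff=2.0)
```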
Lande, Russell
2009-07-01
Adaptation to a sudden extreme change in environment, beyond the usual range of background environmental fluctuations, is analysed using a quantitative genetic model of phenotypic plasticity. Generations are discrete, with time lag tau between a critical period for environmental influence on individual development and natural selection on adult phenotypes. The optimum phenotype, and genotypic norms of reaction, are linear functions of the environment. Reaction norm elevation and slope (plasticity) vary among genotypes. Initially, in the average background environment, the character is canalized with minimum genetic and phenotypic variance, and no correlation between reaction norm elevation and slope. The optimal plasticity is proportional to the predictability of environmental fluctuations over time lag tau. During the first generation in the new environment the mean fitness suddenly drops and the mean phenotype jumps towards the new optimum phenotype by plasticity. Subsequent adaptation occurs in two phases. Rapid evolution of increased plasticity allows the mean phenotype to closely approach the new optimum. The new phenotype then undergoes slow genetic assimilation, with reduction in plasticity compensated by genetic evolution of reaction norm elevation in the original environment.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one can not apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
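A minimal Pareto filter illustrating the sense in which PMOGO methods return a suite of models rather than a single weighted-sum minimizer; the two-objective toy values (data misfit versus regularization) are illustrative.

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of Pareto-optimal rows (all objectives minimized).

    A point is discarded if some other point is at least as good in every
    objective and strictly better in at least one; the survivors form the
    trade-off suite that a PMOGO method presents to the interpreter.
    """
    F = np.asarray(objectives, dtype=float)
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return keep

# Columns: data misfit, regularization (structure) term.
pts = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [4.0, 4.0]])
mask = pareto_front(pts)   # -> [True, True, True, False]
```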
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R
2007-03-01
The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-revised and the Hopkins Verbal Learning Test-revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education, are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
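The stated link between the proximal map of an l(p) norm and that of a Schatten norm of order p can be sketched for p = 1: take the SVD and apply the scalar l1 prox (soft threshold) to the singular values. The 2x2 Hessian stand-in is illustrative.

```python
import numpy as np

def prox_l1(x, gamma):
    """Proximal map of gamma*||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_schatten1(X, gamma):
    """Proximal map of gamma times the Schatten 1-norm (nuclear norm):
    soft-threshold the singular values, per the l_p / Schatten link."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(prox_l1(s, gamma)) @ Vt

rng = np.random.default_rng(7)
H = rng.standard_normal((2, 2))      # e.g. a Hessian estimate at one pixel
H_prox = prox_schatten1(H, gamma=0.5)
```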
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered as the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
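A sketch of how the two fluctuation coordinates might be computed from 1-min data with pandas; the exact ratio used to compare the 3-hour standard deviation against the monthly and yearly running averages is an assumption, as are the window conventions.

```python
import numpy as np
import pandas as pd

def seesaw_components(series_1min):
    """Local and global fluctuation coordinates for one solar-wind
    parameter: a 3-hour rolling standard deviation compared against
    1-month and 1-year running means (window lengths in minutes).
    The normalization by the running mean is an illustrative choice."""
    std3h = series_1min.rolling(3 * 60).std()
    mean_month = series_1min.rolling(30 * 24 * 60).mean()
    mean_year = series_1min.rolling(365 * 24 * 60).mean()
    local = std3h / mean_month                 # local fluctuation coordinate
    glob = std3h / mean_year                   # global fluctuation coordinate
    norm = np.sqrt(local**2 + glob**2)         # "seesaw norm" of the 2-D vector
    return local, glob, norm

# Two years of synthetic 1-min solar wind speed data (first year is NaN
# for the yearly window, as expected for a trailing running average).
t = pd.date_range("2000-01-01", periods=2 * 365 * 24 * 60, freq="min")
v = pd.Series(400 + 50 * np.random.default_rng(8).standard_normal(len(t)), index=t)
local, glob, norm = seesaw_components(v)
```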
The roles of outlet density and norms in alcohol use disorder.
Ahern, Jennifer; Balzer, Laura; Galea, Sandro
2015-06-01
Alcohol outlet density and norms shape alcohol consumption. However, due to analytic challenges we do not know: (a) if alcohol outlet density and norms also shape alcohol use disorder, and (b) whether they act in combination to shape disorder. We applied a new targeted minimum loss-based estimator for rare outcomes (rTMLE) to a general population sample from New York City (N = 4000) to examine the separate and combined relations of neighborhood alcohol outlet density and norms around drunkenness with alcohol use disorder. Alcohol use disorder was assessed using the World Mental Health Comprehensive International Diagnostic Interview (WMH-CIDI) alcohol module. Confounders included demographic and socioeconomic characteristics, as well as history of drinking prior to residence in the current neighborhood. Alcohol use disorder prevalence was 1.78%. We found a marginal risk difference for alcohol outlet density of 0.88% (95% CI 0.00-1.77%), and for norms of 2.05% (95% CI 0.89-3.21%), adjusted for confounders. While each exposure had a substantial relation with alcohol use disorder, there was no evidence of additive interaction between the exposures. Results indicate that the neighborhood environment shapes alcohol use disorder. Despite the lack of additive interaction, each exposure had a substantial relation with alcohol use disorder and our findings suggest that alteration of outlet density and norms together would likely be more effective than either one alone. Important next steps include development and testing of multi-component intervention approaches aiming to modify alcohol outlet density and norms toward reducing alcohol use disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
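A sketch contrasting the hard TSVD cut of the nulling beamformer with a smooth reweighting of singular values in the spirit of NBSS; the weight function and the tuning parameter tau are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def tsvd_truncate(G, k):
    """Nulling-beamformer-style TSVD: keep only the k strongest
    components of an ROI gain (lead-field) matrix G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def subspace_suppress(G, tau):
    """NBSS-style idea (sketch): rather than a hard cut, down-weight
    each singular component smoothly so weak but discriminative
    components are attenuated instead of discarded outright."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    w = s**2 / (s**2 + tau**2)        # smooth shrinkage weight per component
    return U @ np.diag(w * s) @ Vt

rng = np.random.default_rng(9)
G = rng.standard_normal((306, 20))    # 306 MEG sensors, 20 sources in an ROI
G_hard = tsvd_truncate(G, k=3)
G_soft = subspace_suppress(G, tau=0.1 * np.linalg.norm(G, 2))
```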
ERIC Educational Resources Information Center
Halpern, Arthur M.; Ramachandran, B. R.; Glendening, Eric D.
2007-01-01
A report is presented to describe how students can be empowered to construct the full, double-minimum inversion potential for ammonia by performing intrinsic reaction coordinate calculations. This work can be associated with the third-year physical chemistry lecture laboratory or an upper-level course in computational chemistry.
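As a stand-in for the potential the students assemble from their calculations, a quartic double well already shows the double-minimum shape; the functional form and coefficients below are illustrative, not fitted to ab initio energies.

```python
import numpy as np

# V(x) = a*x**4 - b*x**2 + v0 is the simplest double-well stand-in for the
# ammonia inversion (umbrella) potential; minima sit at x = +/- sqrt(b/(2a)).
def double_well(x, a=1.0, b=2.0, v0=1.0):
    return a * x**4 - b * x**2 + v0

x = np.linspace(-2.0, 2.0, 201)          # umbrella coordinate (arbitrary units)
V = double_well(x)
x_min = np.sqrt(2.0 / (2.0 * 1.0))       # = 1.0 for the defaults above
barrier = double_well(0.0) - double_well(x_min)   # inversion barrier height
```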
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well, if not better, than alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
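The column-wise construction lends itself to a compact coordinate descent sketch. The toy implementation below estimates one precision matrix column at a time from the sample covariance (illustrative notation and tuning, not the authors' code):

```python
import numpy as np

def precision_column(S, i, lam, n_sweeps=200):
    """One column of a sparse precision matrix estimate.

    Minimizes 0.5 * b'Sb - b[i] + lam * ||b||_1 by coordinate descent,
    following the column-wise inverse-operator idea.
    """
    p = S.shape[0]
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # residual of the optimality condition, excluding coordinate j
            r = (1.0 if j == i else 0.0) - (S[j] @ b - S[j, j] * b[j])
            b[j] = np.sign(r) * max(abs(r) - lam, 0.0) / S[j, j]
    return b

# assemble the full estimate column by column (symmetrize afterwards if desired)
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
S = np.cov(X, rowvar=False) + 1e-3 * np.eye(20)
Omega = np.column_stack([precision_column(S, i, lam=0.1) for i in range(20)])
```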
Mecke, Ann-Christine; Sundberg, Johan; Granqvist, Svante; Echternach, Matthias
2012-01-01
The closed quotient, i.e., the ratio between the closed phase and the period, is commonly studied in voice research. However, the term may refer to measures derived from different methods, such as inverse filtering, electroglottography or high-speed digital imaging (HSDI). This investigation compares closed quotient data measured by these three methods in two boy singers. Each singer produced sustained tones on two different pitches and a glissando. Audio, the electroglottographic signal (EGG), and HSDI were recorded simultaneously. The audio signal was inverse filtered by means of the Decap program; the closed phase was defined as the flat minimum portion of the flow glottogram. Glottal area was automatically measured in the high-speed images by the built-in camera software, and the closed phase was defined as the flat minimum portion of the area signal. The EGG signal was analyzed in four different ways using the MATLAB open quotient interface. The closed quotient data taken from the EGG were found to be considerably higher than those obtained from inverse filtering. Also, substantial differences were found between the closed quotients derived from HSDI and those derived from inverse filtering. The findings illustrate the importance of distinguishing between these quotients. © 2012 Acoustical Society of America.
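Operationally, the closed quotient reduces to the fraction of a period spent near the flat minimum of the flow or area signal. A toy sketch under that reading (the 5% amplitude threshold is an assumption; published criteria differ between labs and methods):

```python
import numpy as np

def closed_quotient(signal, fs, f0, rel_tol=0.05):
    """Closed quotient of one period of a glottal flow or area signal.

    The closed phase is taken as the portion of the period where the
    signal stays within rel_tol of its flat minimum.
    """
    period = int(round(fs / f0))
    cycle = np.asarray(signal[:period], dtype=float)
    lo, hi = cycle.min(), cycle.max()
    closed = cycle <= lo + rel_tol * (hi - lo)
    return closed.sum() / period

# toy example: half-rectified sinusoid as a crude flow pulse
fs, f0 = 44100, 220.0
t = np.arange(int(fs / f0)) / fs
flow = np.maximum(np.sin(2 * np.pi * f0 * t), 0.0)
print(closed_quotient(flow, fs, f0))  # about 0.5 for this waveform
```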
An inverse dynamics approach to trajectory optimization for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference. Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 km over the southern part of the Pannonian basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained from the minimum problem, solved by the Simplex method.
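The estimation loop, simplex (Nelder-Mead) minimization of a least squares misfit between observed and modeled gravity, is easy to sketch. Here a buried point mass stands in for the right-vertical-cylinder forward model (a simplifying assumption; the cylinder formula would slot into the same place):

```python
import numpy as np
from scipy.optimize import minimize

G = 6.674e-11  # gravitational constant, SI units

def gz_point_mass(params, x):
    """Vertical attraction along a profile over a buried point mass (mGal).
    params = (mass anomaly M in kg, depth z in m); stand-in forward model."""
    M, z = params
    return 1e5 * G * M * z / (x**2 + z**2) ** 1.5  # 1e5 converts m/s^2 to mGal

def misfit(params, x, g_obs):
    return np.sum((gz_point_mass(params, x) - g_obs) ** 2)

# synthetic profile and simplex minimization of the misfit
rng = np.random.default_rng(2)
x = np.linspace(-50e3, 50e3, 41)
g_obs = gz_point_mass((5e15, 10e3), x) + 1.0 * rng.standard_normal(x.size)
res = minimize(misfit, x0=(1e15, 5e3), args=(x, g_obs), method="Nelder-Mead")
print(res.x)  # should land near (5e15, 10e3)
```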
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired in a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
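A minimal version of the TV-L1 cost can be minimized with plain gradient descent once both nonsmooth terms are smoothed, as in this 1-D sketch (the authors use a more efficient nonlinear algorithm on a 2-D acoustic model; the identity sparse representation and all parameter values below are assumptions):

```python
import numpy as np

def tv_l1_invert(A, b, alpha, beta, eps=1e-4, lr=1e-3, n_iter=5000):
    """Minimize ||Ax-b||^2 + alpha*TV(x) + beta*||x||_1 by gradient descent.

    Both nonsmooth terms are smoothed with sqrt(u^2 + eps), so a plain
    gradient step applies; a 1-D x stands in for the 2-D image.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g_data = 2 * A.T @ (A @ x - b)
        d = np.diff(x)
        s = d / np.sqrt(d**2 + eps)      # smoothed sign of finite differences
        g_tv = np.zeros_like(x)
        g_tv[:-1] -= s
        g_tv[1:] += s
        g_l1 = x / np.sqrt(x**2 + eps)   # smoothed sign of x
        x -= lr * (g_data + alpha * g_tv + beta * g_l1)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[40:60] = 1.0    # piecewise-constant target
b = A @ x_true + 0.05 * rng.standard_normal(60)
x_hat = tv_l1_invert(A, b, alpha=0.5, beta=0.1)
print(np.linalg.norm(x_hat - x_true))
```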
Seismic and gravity signature of the Ischia Island Caldera (Italy)
NASA Astrophysics Data System (ADS)
Capuano, P.; de Matteis, R.; Russo, G.
2009-04-01
The Campania (Italy) coasts are characterized by the presence of several volcanoes. The island of Ischia, located at the northwestern end of the Gulf of Naples, belongs to the Neapolitan Volcanic District together with the Phlegrean Fields and Vesuvius; all of these Pleistocene volcanoes have erupted in historical times, and Ischia is characterized by diffuse hydrothermal phenomena. The island represents the emergent part of a more extensive volcanic area developed mainly westward of the island, with underwater volcanoes aligned along regional fault patterns. The activity of the Ischia volcano is attested by the occurrence of eruptions in historical times, the presence of intense hydrothermal phenomena, and seismic activity (e.g. the 1883 Casamicciola earthquake). Ischia is populated by about 50,000 inhabitants, a number that increases, mainly in the summer, owing to a thriving tourism business that is partially due to its active volcanic state. Hazard assessment at active, densely populated volcanoes is critically based on knowledge of the volcano's past behavior and the definition of its present state. As a contribution to the definition of the present state of the Ischia island volcano, we obtain a model of the shallow crust using geophysical observables, through seismic tomography and 3D gravity inversion. In particular, we use travel times collected during the Serapis experiment on the island and its surroundings, and the free air anomaly. A new 3D gravity inversion procedure has been developed to better take into account the shape and effects of topography by approximating it with a triangular mesh. Below each triangle, a sequence of triangular prisms is built, the uppermost prism having its upper face coincident with the triangle following the topography. The inversion is performed by searching for a regularized solution using the minimum norm stabilizer. The main results inferable from the 3D seismic and gravity images are the definition of the caldera rims hypothesized by many authors along the perimeter of the island, with less evidence in the southern part, and the presence of a high velocity/density area inside the caldera that is consistent with the lateral extension of a resurgent block affecting the most recent dynamics of the island.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Chester J
FORTRAN90 codes for inversion of electrostatic geophysical data in terms of three subsurface parameters in a single-well, oilfield environment: the linear charge density of the steel well casing (L), the point charge associated with an induced fracture filled with a conductive contrast agent (Q) and the location of said fracture (s). Theory is described in detail in Weiss et al. (Geophysics, 2016). Inversion strategy is to loop over candidate fracture locations, and at each one minimize the squared Cartesian norm of the data misfit to arrive at L and Q. Solution method is to construct the 2x2 linear system of normal equations and compute L and Q algebraically. Practical Application: Oilfield environments where observed electrostatic geophysical data can reasonably be assumed to be described by a simple L-Q-s model. This may include hydrofracking operations, as postulated in Weiss et al. (2016), but no field validation examples have so far been provided.
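The described strategy, loop over candidate fracture locations s and solve a 2x2 least-squares problem for (L, Q) at each, can be sketched as follows. The two response kernels below are hypothetical stand-ins for the physical kernels derived in Weiss et al. (2016):

```python
import numpy as np

def lq_for_fracture(z_obs, d_obs, s, casing_kernel, charge_kernel):
    """Solve the 2x2 normal equations for (L, Q) at a trial fracture depth s.

    z_obs : electrode depths; d_obs : observed potentials. lstsq on the
    two-column system is equivalent to the algebraic 2x2 normal equations.
    """
    A = np.column_stack([casing_kernel(z_obs), charge_kernel(z_obs, s)])
    coeffs, *_ = np.linalg.lstsq(A, d_obs, rcond=None)
    resid = d_obs - A @ coeffs
    return coeffs, resid @ resid

casing = lambda z: 1.0 / (1.0 + z)                    # hypothetical kernels
charge = lambda z, s: 1.0 / np.sqrt((z - s) ** 2 + 1.0)
z = np.linspace(0.0, 100.0, 50)
d = 2.0 * casing(z) + 0.5 * charge(z, 40.0)           # synthetic: L=2, Q=0.5, s=40

best_s, best_ssr, best_lq = None, np.inf, None
for s in np.linspace(5.0, 95.0, 91):                  # loop over candidate s
    lq, ssr = lq_for_fracture(z, d, s, casing, charge)
    if ssr < best_ssr:
        best_s, best_ssr, best_lq = s, ssr, lq
print(best_s, best_lq)   # expect s ~ 40 with (L, Q) ~ (2.0, 0.5)
```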
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
Carroll, Suzanne J; Paquet, Catherine; Howard, Natasha J; Coffee, Neil T; Adams, Robert J; Taylor, Anne W; Niyonsenga, Theo; Daniel, Mark
2017-02-02
Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories, yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m² [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability was associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm. Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources such that availability poses no barrier to obtaining healthful or unhealthful foods in this region. However, the influence of local descriptive norms varied according to food resource availability in effects on HbA1c. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes.
Alcoholism risk moderation by a socio-religious dimension.
Haber, Jon Randolph; Jacob, Theodore
2007-11-01
Religious affiliation is inversely associated with the development of alcohol-dependence symptoms in adolescents, but the mechanisms of this effect are unclear. The degree to which religious affiliations accommodate to or differentiate from cultural values may influence attitudes about alcohol use. We hypothesized that, given permissive cultural norms about alcohol in the United States, if a religious affiliation differentiates itself from cultural norms, then high-risk adolescents (those with parents having a history of alcoholism) would exhibit fewer alcohol-dependence symptoms compared with other affiliations and nonreligious adolescents. A sample of female adolescent offspring (N = 3,582) in Missouri was selected. Parental alcoholism and religious affiliation and their interaction were examined as predictors of offspring alcohol-dependence symptoms. Findings indicated that (1) parental alcohol history robustly predicted increased offspring alcohol-dependence symptoms, (2) religious rearing appeared protective (offspring exhibited fewer alcohol-dependence symptoms), (3) religious differentiation accounted for most of the protective effect, (4) other religious variables did not account for the differentiation effect, and (5) black religious adolescents were more frequently raised with differentiating affiliations and exhibited greater protective effects. Results demonstrate that religious differentiation accounts for most of the protective influence of religious affiliation. This may be because religious differences from cultural norms (that include permissive alcohol norms) counteract these social influences given alternative "higher" religious ideals.
NASA Astrophysics Data System (ADS)
Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.
2014-03-01
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
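The backbone of such a pixel-based probabilistic inversion is a random-walk Metropolis sampler. A toy sketch with a two-parameter forward model standing in for the plane-wave EM solver (all settings illustrative, not the authors' MCMC machinery):

```python
import numpy as np

def metropolis(misfit, m0, step, n_samples, rng):
    """Random-walk Metropolis sampler for a Gaussian likelihood.

    misfit(m) returns 0.5 * sum(((F(m) - d) / sigma)**2); the chain samples
    exp(-misfit), i.e. the unnormalized posterior under a flat prior.
    """
    m, phi = np.array(m0, float), misfit(m0)
    chain = np.empty((n_samples, len(m0)))
    for k in range(n_samples):
        m_prop = m + step * rng.standard_normal(len(m0))
        phi_prop = misfit(m_prop)
        if np.log(rng.uniform()) < phi - phi_prop:   # accept/reject
            m, phi = m_prop, phi_prop
        chain[k] = m
    return chain

# toy two-parameter forward model: data = m1 * exp(-m2 * t)
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 20)
d = 3.0 * np.exp(-2.0 * t) + 0.05 * rng.standard_normal(t.size)
phi = lambda m: 0.5 * np.sum(((m[0] * np.exp(-m[1] * t) - d) / 0.05) ** 2)
chain = metropolis(phi, [1.0, 1.0], 0.05, 20000, rng)
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))  # posterior summary
```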
Electronic structure and molecular dynamics of Na2Li
NASA Astrophysics Data System (ADS)
Malcolm, Nathaniel O. J.; McDouall, Joseph J. W.
Following the first report (Mile, B., Sillman, P. D., Yacob, A. R. and Howard, J. A., 1996, J. Chem. Soc., Dalton Trans., 653) of the EPR spectrum of the mixed alkali-metal trimer Na2Li, a detailed study has been made of the electronic structure and structural dynamics of this species. Two isomeric forms have been found: one of the type Na-Li-Na, of C2v symmetry, and another, Li-Na-Na, of Cs symmetry. Also, there are two linear saddle points which correspond to 'inversion' transition structures, and a saddle point of Cs symmetry which connects the two minima. A molecular dynamics investigation of these species shows that, at the temperature of the reported experiments (170 K), the C2v minimum is not 'static', but undergoes quite rapid inversion. At higher temperatures the C2v minimum converts to the Cs form, but by a mechanism very different from that suggested by minimum energy path considerations.
An Improved Measure of Cognitive Salience in Free Listing Tasks: A Marshallese Example
ERIC Educational Resources Information Center
Robbins, Michael C.; Nolan, Justin M.; Chen, Diana
2017-01-01
A new free-list measure of cognitive salience, B', is presented, which includes both list position and list frequency. It surpasses other extant measures by being normed to vary between a maximum of 1 and a minimum of 0, thereby making it useful for comparisons irrespective of list length or number of respondents. An illustration of its…
Determining genetic erosion in fourteen Picea chihuahuana Martínez populations.
C.Z. Quiñones-Pérez; C. Wehenkel
2017-01-01
Picea chihuahuana is an endemic species in Mexico and is considered endangered, according to the Mexican Official Norm (NOM-ECOL-059-2010). This species covers a total area of no more than 300 ha located in at least 40 sites along the Sierra Madre Occidental in Durango and Chihuahua states. A minimum of 42,600 individuals has been estimated,...
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum indeed depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function associated with the cost function around this minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
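The described combination of deterministic minimization and random exploration is available off the shelf, e.g. as scipy.optimize.basinhopping. A sketch fitting a generalized Brune spectrum with t* attenuation (the parameterization and settings are assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import basinhopping

def brune(f, log_omega0, fc, gamma, t_star):
    """Generalized Brune displacement spectrum with t* attenuation."""
    return 10.0 ** log_omega0 / (1.0 + (f / fc) ** gamma) * np.exp(-np.pi * f * t_star)

def cost(theta, f, obs):
    log_omega0, fc, gamma, t_star = theta
    if fc <= 0 or gamma <= 0 or t_star < 0:
        return 1e12                      # reject non-physical trial points
    # L2 misfit on log-amplitudes, as is common for spectral fitting
    return np.sum((np.log(brune(f, *theta)) - np.log(obs)) ** 2)

rng = np.random.default_rng(5)
f = np.logspace(-1, 1.5, 80)
obs = brune(f, 2.0, 1.5, 2.0, 0.02) * np.exp(0.05 * rng.standard_normal(f.size))
res = basinhopping(cost, x0=[1.0, 1.0, 2.0, 0.01], niter=50,
                   minimizer_kwargs={"args": (f, obs), "method": "Nelder-Mead"})
print(res.x)   # low-frequency level (log10), corner frequency, decay, t*
```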
Li, Tao; Wang, Jing; Lu, Miao; Zhang, Tianyi; Qu, Xinyun; Wang, Zhezhi
2017-01-01
Due to its sensitivity and specificity, real-time quantitative PCR (qRT-PCR) is a popular technique for investigating gene expression levels in plants. Based on the Minimum Information for Publication of Real-Time Quantitative PCR Experiments (MIQE) guidelines, it is necessary to select and validate putative appropriate reference genes for qRT-PCR normalization. In the current study, three algorithms, geNorm, NormFinder, and BestKeeper, were applied to assess the expression stability of 10 candidate reference genes across five different tissues and three different abiotic stresses in Isatis indigotica Fort. Additionally, the IiYUC6 gene associated with IAA biosynthesis was applied to validate the candidate reference genes. The analysis results of the geNorm, NormFinder, and BestKeeper algorithms indicated certain differences for the different sample sets and different experiment conditions. Considering all of the algorithms, PP2A-4 and TUB4 were recommended as the most stable reference genes for total and different tissue samples, respectively. Moreover, RPL15 and PP2A-4 were considered to be the most suitable reference genes for abiotic stress treatments. The obtained experimental results might contribute to improved accuracy and credibility for the expression levels of target genes by qRT-PCR normalization in I. indigotica. PMID:28702046
An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-09-01
Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and the robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing data with known electrode errors, which require rigorous regularization and cause the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
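The L-curve strategy mentioned above can be illustrated on a generic Tikhonov problem: sweep the hyperparameter, trace (log residual norm, log solution norm), and pick the point of maximum curvature. A sketch of one common corner criterion (the paper's PDIPM and EIT specifics are omitted):

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Pick a Tikhonov hyperparameter at the L-curve corner."""
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    pts = np.array(pts)
    # discrete curvature of the (log rho, log eta) curve via differences
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
            / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    return lambdas[int(np.argmax(kappa))]

rng = np.random.default_rng(7)
A = rng.standard_normal((80, 60))
b = A @ rng.standard_normal(60) + 0.1 * rng.standard_normal(80)
print(l_curve_corner(A, b, np.logspace(-4, 2, 30)))
```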
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics, even in the presence of disturbances.
Inverse eigenproblem for R-symmetric matrices and their approximation
NASA Astrophysics Data System (ADS)
Yuan, Yongxin
2009-11-01
Let R ∈ R^(n×n) be a nontrivial involution, i.e., R = R^(-1) ≠ ±I_n. A matrix G ∈ R^(n×n) is said to be R-symmetric if RGR = G. The set of all n×n R-symmetric matrices is denoted by S. In this paper, we first give the solvability condition for the following inverse eigenproblem (IEP): given a set of vectors {x_i} in C^n and a set of complex numbers {λ_i}, find a matrix A ∈ S such that the {λ_i} and {x_i} are, respectively, the eigenvalues and eigenvectors of A. We then consider the following approximation problem: given an n×n matrix Ã, find Â ∈ S_E such that ||Ã - Â|| = min over A ∈ S_E of ||Ã - A||, where S_E is the solution set of the IEP and ||·|| is the Frobenius norm. We provide an explicit formula for the best approximation solution by means of the canonical correlation decomposition.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects recorded with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
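The proposed ordering, beamformer first, then SVD + ICA in source space, can be sketched as follows (an illustrative pipeline, not the authors' implementation; the LCMV weight formula is the standard unit-gain form):

```python
import numpy as np
from sklearn.decomposition import FastICA

def source_space_ica(data, leadfields, reg, n_components):
    """Beamform first, then SVD + ICA on the source-space time series.

    data       : (n_sensors, n_times) MEG recordings
    leadfields : (n_voxels, n_sensors), one lead-field row per location
    """
    C = np.cov(data) + reg * np.eye(data.shape[0])    # regularized covariance
    Cinv = np.linalg.inv(C)
    # unit-gain LCMV beamformer weights per voxel: w = C^-1 l / (l' C^-1 l)
    W = np.stack([Cinv @ l / (l @ Cinv @ l) for l in leadfields])
    src = W @ data                                    # (n_voxels, n_times)
    # SVD for dimensionality reduction, then ICA on the temporal modes
    _, _, Vt = np.linalg.svd(src, full_matrices=False)
    ica = FastICA(n_components=n_components, random_state=0)
    comps = ica.fit_transform(Vt[:n_components].T).T  # (n_components, n_times)
    maps = src @ np.linalg.pinv(comps)                # spatial map per component
    return comps, maps

rng = np.random.default_rng(6)
comps, maps = source_space_ica(rng.standard_normal((20, 1000)),
                               rng.standard_normal((50, 20)),
                               reg=0.1, n_components=5)
```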
A three-dimensional gravity inversion applied to São Miguel Island (Azores)
NASA Astrophysics Data System (ADS)
Camacho, A. G.; Montesinos, F. G.; Vieira, R.
1997-04-01
Gravimetric studies are becoming more and more widely acknowledged as a useful tool for studying and modeling the distributions of subsurface masses that are associated with volcanic activity. In this paper, new gravimetric data for the volcanic island of São Miguel (Azores) were analyzed and interpreted by a stabilized linear inversion methodology. An inversion model of higher resolution was calculated for the Caldera of Furnas, which has a larger density of data. In order to filter out the noncorrelatable anomalies, least squares prediction was used, resulting in a correlated gravimetric signal model with an accuracy of the order of 0.9 mGal. The gravimetric inversion technique is based on the adjustment of a three-dimensional (3-D) model of cubes of unknown density that represents the island's subsurface. The problem of non-uniqueness is solved by minimization with appropriate covariance matrices of the data (resulting from the least squares prediction) and of the unknowns. We also propose a criterion for choosing a balance between the data fit (which in this case corresponds to residuals with rms of the order of 0.6 mGal) and the smoothness of the solution. The global model of the island includes a low-density zone in a WNW-ESE direction at a depth of the order of 20 km, associated with the Terceira rift spreading center. The minima located at a depth of 4 km may be associated with shallow magmatic chambers beneath the main volcanoes of the island. The main high-density area is related to the Nordeste basaltic shield. With regard to the Caldera of Furnas, in addition to the minimum that can be associated with a magmatic chamber, there are other shallow minima that correspond to eruptive processes.
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
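Given a differentiable forward reflectance model and a Gaussian noise model, the CRB computation itself is a few lines: build the Jacobian, form the Fisher information, invert. A sketch with a toy two-parameter model (the real bio-optical models and sensor noise covariances are considerably more involved):

```python
import numpy as np

def crb(forward, theta, sigma, h=1e-6):
    """Cramer-Rao bounds for unbiased estimation under Gaussian noise.

    forward(theta) -> modeled reflectance vector; sigma -> per-band noise
    std. Fisher information is J' diag(1/sigma^2) J with J the Jacobian
    (finite differences here); the CRB is the diagonal of its inverse.
    """
    theta = np.asarray(theta, float)
    f0 = forward(theta)
    J = np.empty((f0.size, theta.size))
    for k in range(theta.size):
        tp = theta.copy(); tp[k] += h
        J[:, k] = (forward(tp) - f0) / h
    F = J.T @ (J / sigma[:, None] ** 2)
    return np.diag(np.linalg.inv(F))   # minimum variance per parameter

# toy two-parameter reflectance model over 10 spectral bands
wl = np.linspace(400, 700, 10)
model = lambda th: th[0] * np.exp(-th[1] * (wl - 400) / 300.0)
print(crb(model, [0.3, 1.2], sigma=np.full(10, 0.005)))
```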
Conditioning of the Stable, Discrete-time Lyapunov Operator
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
The Schatten p-norm condition of the discrete-time Lyapunov operator L_A, defined on matrices P ∈ R^(n×n) by L_A P ≡ P - APA^T, is studied for stable matrices A ∈ R^(n×n). Bounds are obtained for the norm of L_A and its inverse that depend on the spectrum, singular values and radius of stability of A. Since the solution P of the discrete-time algebraic Lyapunov equation (DALE) L_A P = Q can be ill-conditioned only when either L_A or Q is ill-conditioned, these bounds are useful in determining whether P admits a low-rank approximation, which is important in the numerical solution of the DALE for large n.
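For small n, the operator L_A can be materialized via the identity vec(APA^T) = (A ⊗ A) vec(P), which makes its condition number directly computable; SciPy's solve_discrete_lyapunov then solves the DALE. An illustration of these quantities, not of the paper's bounds (the example matrix is arbitrary but stable):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# vec(A P A^T) = (A kron A) vec(P), so L_A has the matrix representation
# I - kron(A, A); for small n its condition number is directly computable.
A = np.array([[0.8, 0.5, 0.0],
              [0.0, 0.6, 0.5],
              [0.0, 0.0, 0.4]])            # stable: eigenvalues 0.8, 0.6, 0.4
n = A.shape[0]
LA = np.eye(n * n) - np.kron(A, A)
print("cond(L_A) =", np.linalg.cond(LA))   # 2-norm condition number

# solve the DALE  L_A P = Q  and verify the residual
Q = np.eye(n)
P = solve_discrete_lyapunov(A, Q)
print(np.linalg.norm(P - A @ P @ A.T - Q)) # ~ 0 up to round-off
```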
Inversion of Airborne Electromagnetic Data: Application to Oil Sands Exploration
NASA Astrophysics Data System (ADS)
Cristall, J.; Farquharson, C. G.; Oldenburg, D. W.
2004-05-01
In general, three-dimensional inversion of airborne electromagnetic data for models of the conductivity variation in the Earth is currently impractical because of the large amount of computation time that it requires. At the other extreme, one-dimensional imaging techniques based on transforming the observed data as a function of measurement time or frequency at each location to values of conductivity as a function of depth are very fast. Such techniques can provide an image that, in many circumstances, is a fair, qualitative representation of the subsurface. However, this is not the same as a model that is known to reproduce the observations to a level considered appropriate for the noise in the data. This makes it hard to assess the quality and reliability of the images produced by the transform techniques until other information such as bore-hole logs is obtained. A compromise between these two interpretation strategies is to retain the approximation of a one-dimensional variation of conductivity beneath each observation location, but to invert the corresponding data as functions of time or frequency, taking advantage of all available aspects of inversion methodology. For example, using an automatic method such as the GCV or L-curve criteria for determining how well to fit a set of data when the actual amount of noise is not known, even when there are clear multi-dimensional effects in the data; using something other than a sum-of-squares measure for the misfit, for example the Huber M-measure, which affords a robust fit to data that contain non-Gaussian noise; and using an l1-norm or similar measure of model structure that enables piecewise constant, blocky models to be constructed. These features, as well as the basic concepts of minimum-structure inversion, result in a flexible and powerful interpretation procedure that, because of the one-dimensional approximation, is sufficiently rapid to be a viable alternative to the imaging techniques presently in use. We provide an example that involves the interpretation of an airborne time-domain electromagnetic data-set from an oil sands exploration project in Alberta. The target is the layer that potentially contains oil sands. This layer is relatively resistive, with its resistivity increasing with increasing hydrocarbon content, and is sandwiched between two more conductive layers. This is quite different from the classical electromagnetic geophysics scenario of looking for a conductive mineral deposit in resistive shield rocks. However, inverting the data enabled the depth, thickness and resistivity of the target layer to be well determined. As a consequence, it is concluded that airborne electromagnetic surveys, when combined with inversion procedures, can be a very cost-effective way of mapping even fairly subtle conductivity variations over large areas.
Mirus, B.B.; Perkins, K.S.; Nimmo, J.R.; Singha, K.
2009-01-01
To understand their relation to pedogenic development, soil hydraulic properties in the Mojave Desert were investigated for three deposit types: (i) recently deposited sediments in an active wash, (ii) a soil of early Holocene age, and (iii) a highly developed soil of late Pleistocene age. Effective parameter values were estimated for a simplified model based on Richards' equation using a flow simulator (VS2D), an inverse algorithm (UCODE-2005), and matric pressure and water content data from three ponded infiltration experiments. The inverse problem framework was designed to account for the effects of subsurface lateral spreading of infiltrated water. Although none of the inverse problems converged on a unique, best-fit parameter set, a minimum standard error of regression was reached for each deposit type. Parameter sets from the numerous inversions that reached the minimum error were used to develop probability distributions for each parameter and deposit type. Electrical resistance imaging obtained for two of the three infiltration experiments was used to independently test flow model performance. Simulations for the active wash and Holocene soil successfully depicted the lateral and vertical fluxes. Simulations of the more pedogenically developed Pleistocene soil did not adequately replicate the observed flow processes, which would require a more complex conceptual model to include smaller scale heterogeneities. The inverse-modeling results, however, indicate that with increasing age, the steep slope of the soil water retention curve shifts toward more negative matric pressures. Assigning effective soil hydraulic properties based on soil age provides a promising framework for future development of regional-scale models of soil moisture dynamics in arid environments for land-management applications. © Soil Science Society of America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supardiyono; Santosa, Bagus Jaya; Physics Department, Faculty of Mathematics and Natural Sciences, Sepuluh Nopember Institute of Technology, Surabaya
A one-dimensional (1-D) velocity model and station corrections for the West Java zone were computed by inverting P-wave arrival times recorded on a local seismic network of 14 stations. A total of 61 local events with a minimum of 6 P-phases, rms 0.56 s and a maximum gap of 299° were selected. Comparison with previous earthquake locations shows an improvement for the relocated earthquakes. Tests were carried out to verify the robustness of the inversion results in order to corroborate the conclusions drawn from our research. The obtained minimum 1-D velocity model can be used to improve routine earthquake locations and represents a further step toward more detailed seismotectonic studies in this area of West Java.
United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2
1983-12-01
filters are given below: (1) Inverse filter - based on the model given in Eq. (2) and the criterion of minimizing the norm (i.e., power) of the...and compared based on their performances in machine classification under a variety of blur and noise conditions. These filters are analyzed to...criteria based on various assumptions of the image models. In practice, filter performance varies with the type of image, the blur and the noise conditions.
L 1-2 minimization for exact and stable seismic attenuation compensation
NASA Astrophysics Data System (ADS)
Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang
2018-06-01
Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, and together they inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity; it can be performed on either pre-stack or post-stack data so as to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1 norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex penalty function of the L1-2 norm can be decomposed into two convex subproblems via the difference of convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
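The decomposition described above is easy to sketch: the outer difference-of-convex loop linearizes -||x||_2 at the current iterate, and each convex L1 subproblem is solved iteratively (ISTA is used below for brevity, in place of the paper's ADMM subproblem solver; all settings are illustrative):

```python
import numpy as np

def l1_minus_l2(A, b, lam, n_dca=10, n_ista=200):
    """Sparse recovery with the L1-2 penalty lam * (||x||_1 - ||x||_2)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the data term
    for _ in range(n_dca):
        nx = np.linalg.norm(x)
        w = x / nx if nx > 0 else np.zeros_like(x)  # subgradient of ||x||_2
        for _ in range(n_ista):
            g = A.T @ (A @ x - b) - lam * w         # gradient of smooth part
            z = x - step * g
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[7, 33, 70]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = l1_minus_l2(A, b, lam=0.05)
print(np.nonzero(np.round(x_hat, 2))[0])            # expect indices 7, 33, 70
```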
Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali
2013-04-01
The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and it results in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform crossover and low mutation rate are examined. The optimum solution parameters and performance were determined from the convergence of the testing error with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The crossover probability is 0.9-0.95 and mutation has been tested at a probability of 0.01. The application of this genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section was efficient. Keywords: seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, crossover, mutation.
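A toy version of the scheme, an elitist GA with uniform crossover minimizing the L2 misfit between observed and synthetic traces under the usual convolutional model, might look like this (layer count, impedance bounds and GA settings are illustrative, and impedance is only constrained up to the non-uniqueness of reflectivity):

```python
import numpy as np

rng = np.random.default_rng(10)

def synthetic(z, wavelet):
    """Normal-incidence synthetic: reflectivity from impedance, convolved
    with a wavelet (the usual convolutional model assumed here)."""
    r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
    return np.convolve(r, wavelet, mode="same")

def ga_invert(trace, wavelet, n_layers, pop=100, gens=200,
              z_range=(2000.0, 8000.0), pc=0.9, pm=0.01):
    """Elitist GA with uniform crossover minimizing the L2 trace misfit."""
    lo, hi = z_range
    P = rng.uniform(lo, hi, size=(pop, n_layers))
    for _ in range(gens):
        err = np.array([np.linalg.norm(synthetic(ind, wavelet) - trace)
                        for ind in P])
        P = P[np.argsort(err)]
        elite = P[:2].copy()                     # elitism: keep the 2 best
        # truncation selection from the better half, then uniform crossover
        parents = P[rng.integers(0, pop // 2, size=(pop - 2, 2))]
        mask = rng.random((pop - 2, n_layers)) < 0.5
        kids = np.where(mask, parents[:, 0], parents[:, 1])
        no_cross = rng.random(pop - 2) >= pc     # some pairs pass through
        kids[no_cross] = parents[no_cross, 0]
        mutate = rng.random(kids.shape) < pm     # random-reset mutation
        kids[mutate] = rng.uniform(lo, hi, size=mutate.sum())
        P = np.vstack([elite, kids])
    return P[0]

wavelet = np.array([-0.1, 0.5, 1.0, 0.5, -0.1])  # crude symmetric wavelet
z_true = np.array([2500, 2500, 4000, 4000, 4000, 3000, 3000, 6000], float)
trace = synthetic(z_true, wavelet)
print(ga_invert(trace, wavelet, n_layers=8))
```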
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
The global rotating scalar field vacuum on anti-de Sitter space-time
NASA Astrophysics Data System (ADS)
Kent, Carl; Winstanley, Elizabeth
2015-01-01
We consider the definition of the global vacuum state of a quantum scalar field on n-dimensional anti-de Sitter space-time as seen by an observer rotating about the polar axis. Since positive (or negative) frequency scalar field modes must have positive (or negative) Klein-Gordon norm respectively, we find that the only sensible choice of positive frequency corresponds to positive frequency as seen by a static observer. This means that the global rotating vacuum is identical to the global nonrotating vacuum. For n ≥ 4, if the angular velocity of the rotating observer is smaller than the inverse of the anti-de Sitter radius of curvature, then modes with positive Klein-Gordon norm also have positive frequency as seen by the rotating observer. We comment on the implications of this result for the construction of global rotating thermal states.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
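At the matrix level, the building block is the interpolative decomposition, for which SciPy ships a randomized implementation. A sketch of the column-skeleton reconstruction (the tensor-level bookkeeping of CTD-ID is omitted):

```python
import numpy as np
import scipy.linalg.interpolative as sli

# Matrix ID: A ~ A[:, idx[:k]] @ proj, i.e. all columns expressed as linear
# combinations of k selected "skeleton" columns; the tensor ID of the paper
# reduces to IDs of matrices built from randomized projections of CTD terms.
rng = np.random.default_rng(11)
A = rng.standard_normal((100, 12)) @ rng.standard_normal((12, 50))  # rank 12

k, idx, proj = sli.interp_decomp(A, 1e-8)        # rank chosen for accuracy 1e-8
B = sli.reconstruct_skel_matrix(A, k, idx)       # the selected columns
A_approx = sli.reconstruct_matrix_from_id(B, idx, proj)
print(k, np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```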
L1 norm based common spatial patterns decomposition for scalp EEG BCI.
Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong
2013-08-06
Brain-computer interfaces (BCIs) constitute one of the most popular branches of biomedical engineering. They aim at constructing a communication channel between disabled persons and auxiliary equipment in order to improve patients' lives. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situations, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion or loose electrode contact. Because outliers and artifacts are usually observed with large amplitude, when CSP is solved in terms of the L2 norm their effect is exaggerated by the squaring of outliers, which ultimately degrades MI-based BCI performance, whereas the L1 norm reduces outlier effects, as has been shown in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation using the L1 norm, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied it, as well as the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both a peer BCI dataset with simulated outliers and a dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSPs are statistically significant. The results for both the simulated and real BCI datasets consistently reveal that the proposed method yields much higher classification accuracies than the conventional CSP and TR-CSP. By incorporating L1-norm-based eigendecomposition into Common Spatial Patterns, the proposed approach can effectively improve the robustness of BCI systems to EEG outliers, and is thus promising for actual MI BCI applications, where outliers are inevitably introduced into EEG recordings.
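For reference, the standard L2 CSP baseline reduces to a generalized eigenproblem on the class covariance matrices, as sketched below; the paper's contribution replaces this variance (L2) objective with absolute deviations and solves the resulting L1 eigenproblem iteratively (that variant is not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_filt=3):
    """Standard (L2) CSP spatial filters for two motor imagery classes.

    X1, X2 : lists of (n_channels, n_times) trials per class. Filters are
    eigenvectors of the generalized problem C1 w = lambda (C1 + C2) w.
    """
    C1 = sum(np.cov(t) for t in X1) / len(X1)
    C2 = sum(np.cov(t) for t in X2) / len(X2)
    w, V = eigh(C1, C1 + C2)             # generalized eigenvalues, ascending
    # keep filters from both ends: extreme variance ratios for each class
    return np.hstack([V[:, :n_filt], V[:, -n_filt:]])

rng = np.random.default_rng(12)
X1 = [rng.standard_normal((8, 500)) * np.linspace(1, 2, 8)[:, None]
      for _ in range(20)]
X2 = [rng.standard_normal((8, 500)) * np.linspace(2, 1, 8)[:, None]
      for _ in range(20)]
W = csp_filters(X1, X2)
features = np.log(np.var(W.T @ X1[0], axis=1))   # typical log-variance features
```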
Kim, Minzee; Longhofer, Wesley; Boyle, Elizabeth Heger; Nyseth, Hollie
2014-01-01
Using the case of adolescent fertility, we ask the questions of whether and when national laws have an effect on outcomes above and beyond the effects of international law and global organizing. To answer these questions, we utilize a fixed-effect time-series regression model to analyze the impact of minimum-age-of-marriage laws in 115 poor- and middle-income countries from 1989 to 2007. We find that countries with strict laws setting the minimum age of marriage at 18 experienced the most dramatic decline in rates of adolescent fertility. Trends in countries that set this age at 18 but allowed exceptions (for example, marriage with parental consent) were indistinguishable from countries that had no such minimum-age-of-marriage law. Thus, policies that adhere strictly to global norms are more likely to elicit desired outcomes. The article concludes with a discussion of what national law means in a diffuse global system where multiple actors and institutions make the independent effect of law difficult to identify. PMID:25525281
Spectral factorization of wavefields and wave operators
NASA Astrophysics Data System (ADS)
Rickett, James Edward
Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multi-dimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multi-dimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can be the result of either the surface-recording geometry, or focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can be easily both factored and inverted to compensate for uneven subsurface illumination in migrated images. Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
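The 1-D factorization engine behind this approach is the classical Kolmogorov (cepstral) method: take the log of the power spectrum, keep the causal half of its cepstrum, and exponentiate back. A sketch (even-length, strictly positive spectra assumed; the helix mapping itself is omitted):

```python
import numpy as np

def minimum_phase(power_spec):
    """Kolmogorov (cepstral) spectral factorization.

    power_spec: strictly positive power spectrum on an even-length FFT
    grid. Returns the causal minimum-phase wavelet whose squared magnitude
    spectrum matches power_spec. With helical boundary conditions the same
    1-D routine factors multidimensional autocorrelations laid out as one
    long 1-D array.
    """
    n = len(power_spec)
    cep = np.fft.ifft(np.log(power_spec)).real  # symmetric cepstrum of log-PSD
    w = np.zeros(n)
    w[0], w[n // 2] = 0.5, 0.5                  # halve the two symmetric ends
    w[1:n // 2] = 1.0                           # keep causal part, zero the rest
    return np.fft.ifft(np.exp(np.fft.fft(w * cep))).real

# factor the spectrum of a known minimum-phase wavelet and check the match
n = 64
f = np.zeros(n); f[0], f[1] = 2.0, 1.0          # minimum phase: zero at z = -1/2
S = np.abs(np.fft.fft(f)) ** 2
print(minimum_phase(S)[:3])                     # ~ [2, 1, 0]
```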
Battjes-Fries, Marieke C E; van Dongen, Ellen J I; Renes, Reint Jan; Meester, Hante J; Van't Veer, Pieter; Haveman-Nies, Annemien
2016-08-05
To unravel the effect of school-based nutrition education, insight into the implementation process is needed. In this study, process indicators of Taste Lessons (a nutrition education programme for Dutch elementary schools) and their association with changes in behavioural determinants relevant to healthy eating behaviour are studied. The study sample consisted of 392 Dutch primary school children from 12 schools. Data were collected using teacher and child questionnaires at baseline, and at one and six months after the intervention. Multilevel regression analyses were conducted to study the association between dose, appreciation and children's engagement in interpersonal communication (talking about Taste Lessons with others after the lessons), and change in knowledge, awareness, skills, attitude, emotion, subjective norm and intention towards two target behaviours. With an average implementation of a third of the programme activities, dose positively predicted change in children's subjective norm of the teacher after one month. Teachers and children highly appreciated Taste Lessons. Whereas teacher appreciation was inversely associated, child appreciation was positively associated with children's change in awareness, emotion and subjective norm of teachers after one month and in attitude and subjective norm of parents after six months. Interpersonal communication was positively associated with children's change in five determinants after one month and in attitude and intention after six months. The implementation process is related to the programme outcomes of Taste Lessons. Process data provide valuable insights into factors that contribute to the effect of interventions in real-life settings.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
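Generically, the regularized inversion with a data-fit/model-norm tradeoff described above can be sketched as follows (G, d, L, and beta are assumed placeholders for the forward operator, data, roughness operator, and regularization weight; nothing here is specific to the hydraulic model):

```python
import numpy as np

def tikhonov_solve(G, d, L, beta):
    """Minimize ||G m - d||^2 + beta^2 ||L m||^2 via a stacked least-squares system."""
    A = np.vstack([G, beta * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def tradeoff_curve(G, d, L, betas):
    """Data misfit and model roughness over a sweep of regularization weights,
    the systematic tradeoff analysis the abstract describes."""
    points = []
    for beta in betas:
        m = tikhonov_solve(G, d, L, beta)
        points.append((np.linalg.norm(G @ m - d), np.linalg.norm(L @ m)))
    return points
```

Plotting the returned (misfit, roughness) pairs on log axes gives the usual L-shaped tradeoff curve from which a weighting can be chosen.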
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Buss, James R.; Kopriva, Ivica
2004-04-01
We proposed a physics approach to solving a physical inverse problem, namely choosing the unique equilibrium solution at the minimum free energy H = E - ToS, which includes the Wiener least-mean-squares solution (minimum E) and ICA (maximum S) as special cases. The "unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at a single pixel in real-world cases of remote sensing, early tumor detection, and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, and a solution selected, by means of the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
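The L1 machinery underlying such sparse reconstructions can be illustrated with the basic iterative soft-thresholding algorithm (ISTA); note this sketch uses the generic synthesis form min 0.5||Ax - b||^2 + lam||x||_1, whereas SCCD penalizes the L1 norm of a variation map (an analysis-type prior) and needs a more elaborate solver:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # inverse Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Each iteration takes a gradient step on the data-fit term and then shrinks small coefficients to exactly zero, which is what produces the sparse solution.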
Cheng, Su-Fen; Lee-Hsieh, Jane; Turton, Michael A; Lin, Kuan-Chia
2014-06-01
Little research has investigated the establishment of norms for nursing students' self-directed learning (SDL) ability, recognized as an important capability for professional nurses. An item response theory (IRT) approach was used to establish norms for SDL abilities valid for the different nursing programs in Taiwan. The purposes of this study were (a) to use IRT with a graded response model to reexamine the SDL instrument, or the SDLI, originally developed by this research team using confirmatory factor analysis and (b) to establish SDL ability norms for the four different nursing education programs in Taiwan. Stratified random sampling with probability proportional to size was used. A minimum of 15% of students from the four different nursing education degree programs across Taiwan was selected. A total of 7,879 nursing students from 13 schools were recruited. The research instrument was the 20-item SDLI developed by Cheng, Kuo, Lin, and Lee-Hsieh (2010). IRT with the graded response model was used with a two-parameter logistic model (discrimination and difficulty) for the data analysis, calculated using MULTILOG. Norms were established using percentile rank. Analysis of item information and test information functions revealed that 18 items exhibited very high discrimination and two items had high discrimination. The test information function was higher in this range of scores, indicating greater precision in the estimate of nursing student SDL. Reliability fell between .80 and .94 for each domain and the SDLI as a whole. The total information function shows that the SDLI is appropriate for all nursing students, except for the top 2.5%. SDL ability norms were established for each nursing education program and for the nation as a whole. IRT is shown to be a potent and useful methodology for scale evaluation. The norms for SDL established in this research will provide practical standards for nursing educators and students in Taiwan.
Pseudo paths towards minimum energy states in network dynamics
NASA Astrophysics Data System (ADS)
Hedayatifar, L.; Hassanibesheli, F.; Shirazi, A. H.; Vasheghani Farahani, S.; Jafari, G. R.
2017-10-01
The dynamics of networks governed by Heider balance theory move towards lower-tension states. The condition derived from this theory forces agents to reevaluate and modify their interactions to achieve equilibrium. These possible changes in the network's topology can be considered as various paths that guide systems to minimum energy states. Based on this theory, the final destination of a system can be either a local minimum of the energy (a 'jammed state') or the global minimum (a balanced state). The question we would like to address is whether jammed states appear merely by chance, or whether there exist pseudo paths that bind a system towards a jammed state. We introduce an indicator, based on the inverse participation ratio (IPR) method, for suspecting the location of a jammed state. We identify a margin before a local minimum where the number of possible paths decreases drastically. This condition proves adequate for ending up in a jammed state.
The inverse problem in electroencephalography using the bidomain model of electrical activity.
Lopez Rincon, Alejandro; Shimoda, Shingo
2016-12-01
Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimates (MNEs) and low resolution electrical tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of the Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
The Lanchester square-law model extended to a (2,2) conflict
NASA Astrophysics Data System (ADS)
Colegrave, R. K.; Hyde, J. M.
1993-01-01
A natural extension of the Lanchester (1,1) square-law model is the (M,N) linear model in which M forces oppose N forces with constant attrition rates. The (2,2) model is treated from both direct and inverse viewpoints. The inverse problem means that the model is to be fitted to a minimum number of observed force levels, i.e. the attrition rates are to be found from the initial force levels together with the levels observed at two subsequent times. An approach based on Hamiltonian dynamics has enabled the authors to derive a procedure for solving the inverse problem, which is readily computerized. Conflicts in which participants unexpectedly rally or weaken must be excluded.
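The direct (forward) problem of the (M,N) linear model is a linear ODE system, dx/dt = -A y, dy/dt = -B x; a minimal integration sketch with illustrative attrition matrices (all values invented for the example, and no non-negativity clamp on force levels) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lanchester_mn(x0, y0, A, B, t_eval):
    """Forward (M,N) Lanchester linear model: dx/dt = -A y, dy/dt = -B x.

    A: (M, N) attrition rates of y-forces against x; B: (N, M) of x against y.
    """
    m = len(x0)
    def rhs(t, z):
        x, y = z[:m], z[m:]
        return np.concatenate([-A @ y, -B @ x])
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]),
                    np.concatenate([x0, y0]), t_eval=t_eval)
    return sol.y[:m], sol.y[m:]

# Example (2,2) engagement with illustrative attrition rates
x_levels, y_levels = lanchester_mn(
    np.array([100.0, 80.0]), np.array([90.0, 90.0]),
    A=np.array([[0.02, 0.01], [0.015, 0.02]]),
    B=np.array([[0.01, 0.02], [0.02, 0.01]]),
    t_eval=np.linspace(0.0, 10.0, 50))
```

The inverse problem the abstract describes then amounts to fitting the entries of A and B so that the simulated levels match the observed levels at the two observation times.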
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
ERIC Educational Resources Information Center
Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec
2011-01-01
We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…
Time-domain wavefield reconstruction inversion
NASA Astrophysics Data System (ADS)
Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan
2017-12-01
Wavefield reconstruction inversion (WRI) is an improvement on full waveform inversion that has been proposed in recent years. The WRI method expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update the model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high memory demand and its requirement for time-frequency transformations, with their additional computational cost. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Synthetic data examples illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.
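For orientation, the penalty objective that WRI minimizes is commonly written (a standard form from the wavefield-reconstruction literature; the symbols below are assumptions, not taken from this abstract) as

    min over (m, u) of ||P u - d||^2 + lambda^2 ||A(m) u - q||^2,

where u is the reconstructed wavefield, P samples u at the receivers, d is the observed data, A(m) is the discretized wave operator for model m, q is the source term, and lambda weights the wave-equation penalty against the data fit. Because the wave equation is only enforced as a penalty, u need not satisfy it exactly at early iterations, which is what enlarges the search space.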
Inverse free steering law for small satellite attitude control and power tracking with VSCMGs
NASA Astrophysics Data System (ADS)
Malik, M. S. I.; Asghar, Sajjad
2014-01-01
Recent developments in integrated power and attitude control systems (IPACSs) for small satellites have opened a new dimension for more complex and demanding space missions. This paper presents a new inverse-free steering approach for integrated power and attitude control using variable-speed single-gimbal control moment gyroscopes (VSCMGs). The proposed inverse-free steering law computes the VSCMG steering commands (gimbal rates and wheel accelerations) such that the error signal (the difference between command and output) in the feedback loop is driven to zero. An H∞ norm optimization approach is employed to synthesize the static matrix elements of the steering law for a static state of the VSCMG; these matrix elements are then suitably made dynamic for adaptation. To improve the performance of the proposed steering law while passing through a singular state of the CMG cluster (no torque output), the matrix elements of the steering law are suitably modified. This steering law is therefore capable of escaping internal singularities and using the full momentum capacity of the CMG cluster. Finally, two numerical examples for a satellite in a low Earth orbit are simulated to test the proposed steering law.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
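The inverse-filter construction described here can be sketched as below; the eps guard against near-zero spectral values is an addition of mine (the abstract applies the plain reciprocal to noiseless data), and sweeping filter_length while recording the error reproduces the filter-length optimization:

```python
import numpy as np

def truncated_inverse_filter(response, n_fft, filter_length, eps=1e-12):
    """Inverse filter = IDFT of 1 / DFT(response), truncated to filter_length."""
    R = np.fft.fft(response, n_fft)
    h = np.fft.ifft(1.0 / np.where(np.abs(R) > eps, R, eps)).real
    return h[:filter_length]

def deconvolve(data, response, filter_length, n_fft=1024):
    """Deconvolve by convolving the data with the truncated inverse filter."""
    h = truncated_inverse_filter(response, n_fft, filter_length)
    return np.convolve(data, h)[:len(data)]
```

Plotting the deconvolution error against filter_length, as the abstract describes, identifies the most accurate filter length for each response function.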
NASA Astrophysics Data System (ADS)
Jahandari, H.; Farquharson, C. G.
2017-11-01
Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
American Sign Language/English bilingual model: a longitudinal study of academic growth.
Lange, Cheryl M; Lane-Outlaw, Susan; Lange, William E; Sherwood, Dyan L
2013-10-01
This study examines reading and mathematics academic growth of deaf and hard-of-hearing students instructed through an American Sign Language (ASL)/English bilingual model. The study participants were exposed to the model for a minimum of 4 years. The study participants' academic growth rates were measured using the Northwest Evaluation Association's Measure of Academic Progress assessment and compared with a national-normed group of grade-level peers that consisted primarily of hearing students. The study also compared academic growth for participants by various characteristics such as gender, parents' hearing status, and secondary disability status and examined the academic outcomes for students after a minimum of 4 years of instruction in an ASL/English bilingual model. The findings support the efficacy of the ASL/English bilingual model.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten
2008-09-01
Inverse simulations of musculoskeletal models compute internal forces such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start towards ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-06-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km of the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions are used by the predictor step to obtain the plume strength. Finally, the same interpolation functions, corrected using the estimated plume strength, are used to solve for the plume location. Through optimization of the relative locations of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
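As a toy illustration of the predictor step only (everything below is assumed: two forward simulations at known strengths Q1 and Q2, sensor readings varying roughly linearly with strength; the corrector step for plume location is not shown):

```python
import numpy as np

def predict_strength(r1, r2, Q1, Q2, r_obs):
    """Estimate plume strength from readings at a few sampling points.

    r1, r2: simulated readings at the sampling points for strengths Q1, Q2.
    r_obs: observed readings. Assumes readings vary ~linearly with strength.
    """
    slope = (r2 - r1) / (Q2 - Q1)          # per-sensor sensitivity to strength
    intercept = r1 - slope * Q1
    # Least-squares fit of Q to r_obs ~ intercept + slope * Q
    return float(slope @ (r_obs - intercept) / (slope @ slope))
```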
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges on an ℓ1-norm sparse approximation exponentially. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
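The analog (nonspiking) LCA dynamics that the spiking network is designed to match can be sketched as a simple Euler integration; this is a generic form from the LCA literature with illustrative parameter values, not the NEURON model of the paper:

```python
import numpy as np

def lca(Phi, y, lam, tau=0.01, dt=1e-4, n_steps=5000):
    """Locally competitive algorithm: analog dynamics converging to an
    l1-sparse code a with y ~ Phi @ a.

    Phi: (n_pixels, n_neurons) dictionary; y: stimulus; lam: threshold.
    """
    n = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(n)            # lateral inhibition (competition)
    b = Phi.T @ y                          # feedforward drive
    u = np.zeros(n)                        # internal membrane-like states
    a = np.zeros(n)
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft-threshold activation
        u += (dt / tau) * (b - u - G @ a)  # leaky integration with competition
    return a
```

Active neurons inhibit their neighbors through G, so the population settles on a small set of units that together explain the stimulus.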
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Recovering an elastic obstacle containing embedded objects by the acoustic far-field measurements
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2018-01-01
Consider the inverse scattering problem of time-harmonic acoustic waves by a 3D bounded elastic obstacle which may contain embedded impenetrable obstacles inside. We propose a novel and simple technique to show that the elastic obstacle can be uniquely recovered by the acoustic far-field pattern at a fixed frequency, disregarding its contents. Our method is based on constructing a well-posed modified interior transmission problem on a small domain and makes use of an a priori estimate for both the acoustic and elastic wave fields in the usual H1-norm. In the case when there is no obstacle embedded inside the elastic body, our method gives a much simpler proof for the uniqueness result obtained previously in the literature (Natroshvili et al 2000 Rend. Mat. Serie VII 20 57-92; Monk and Selgas 2009 Inverse Problems Imaging 3 173-98).
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an ℓ1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for ℓ1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.
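The core Iteratively Reweighted Norm idea, replacing an ℓ1 objective by a sequence of weighted ℓ2 problems, can be sketched for a bare ℓ1 data-fidelity fit (the paper's full algorithm also carries the TV term; the eps smoothing is the usual guard against zero residuals):

```python
import numpy as np

def irls_l1(A, b, n_iter=30, eps=1e-8):
    """Approximately minimize ||Ax - b||_1 via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # L2 solution as starting point
    for _ in range(n_iter):
        r = A @ x - b
        s = 1.0 / np.sqrt(np.abs(r) + eps)          # row scaling whose square is 1/|r|
        x = np.linalg.lstsq(A * s[:, None], s * b, rcond=None)[0]
    return x
```

Each weighted ℓ2 step downweights rows with large residuals, so isolated salt-and-pepper outliers stop dominating the fit after a few iterations.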
An inverse problem of determining the implied volatility in option pricing
NASA Astrophysics Data System (ADS)
Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu
2008-04-01
In the Black-Scholes world, volatility is an important quantity that cannot be observed directly but has a major impact on the option value. In practice, traders usually work with what is known as implied volatility, which is implied by option prices observed in the market. In this paper, we use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, namely the average value of the option premium for a fixed strike price over all possible maturities from the current time to a chosen future time, is known. The problem is converted into a terminal control problem by the Green's function method. The existence and uniqueness of the minimum of the control functional are addressed by the optimal control method, and the necessary condition which must be satisfied by the minimum is also given. The results obtained in the paper may be useful for those who engage in risk management or volatility trading.
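For a single quoted option, the implied-volatility inversion reduces to one-dimensional root finding on the Black-Scholes formula; a standard sketch follows (the paper's problem, recovering volatility from averaged premiums over all maturities, is a harder PDE-constrained problem that this does not address):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_volatility(price, S, K, T, r):
    """Volatility implied by an observed market price (bracketing root find)."""
    return brentq(lambda sig: bs_call_price(S, K, T, r, sig) - price, 1e-6, 5.0)

# Example: recover sigma = 0.2 from its own model price
p = bs_call_price(100.0, 105.0, 0.5, 0.01, 0.2)
print(implied_volatility(p, 100.0, 105.0, 0.5, 0.01))   # ~0.2
```

Because the call price is strictly increasing in sigma, the bracketing root find is guaranteed a unique solution within the bracket.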
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L1 or L∞ norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
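The deterministic LP formulation can be sketched directly: with the L∞ norm, the minimum distance between two convex polyhedra given by vertex lists is a linear program in the convex-combination weights (variable layout and solver choice below are mine, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def min_distance_linf(VA, VB):
    """Minimize ||p - q||_inf with p in conv(VA), q in conv(VB).

    VA: (na, d) vertices of polyhedron A; VB: (nb, d) vertices of B.
    Variables: x = [lambda (na), mu (nb), t (1)], minimize t.
    """
    na, d = VA.shape
    nb = VB.shape[0]
    c = np.zeros(na + nb + 1); c[-1] = 1.0
    # p - q - t <= 0 and -(p - q) - t <= 0, per coordinate
    A_ub = np.zeros((2 * d, na + nb + 1))
    A_ub[:d, :na] = VA.T;  A_ub[:d, na:na + nb] = -VB.T
    A_ub[d:, :na] = -VA.T; A_ub[d:, na:na + nb] = VB.T
    A_ub[:, -1] = -1.0
    b_ub = np.zeros(2 * d)
    # weights in each polyhedron sum to one
    A_eq = np.zeros((2, na + nb + 1))
    A_eq[0, :na] = 1.0; A_eq[1, na:na + nb] = 1.0
    b_eq = np.ones(2)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (na + nb + 1))
    return res.fun
```

Replacing the single t by one bound variable per coordinate and minimizing their sum gives the corresponding L1 version.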
Optimal impulsive time-fixed orbital rendezvous and interception with path constraints
NASA Technical Reports Server (NTRS)
Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.
1990-01-01
Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.
THE EFFECT OF AUTOMOTIVE FUEL CONSERVATION MEASURES ON AIR POLLUTION
A number of policies have been designed to reduce gasoline consumption by automobiles, including: gasoline rationing; increases in the federal excise tax on gasoline; excise taxes on new cars, in inverse proportion to their fuel economy; and regulations to set minimum levels on a...
Fraction of exhaled nitric oxide (FeNO) norms in healthy North African children 5-16 years old.
Rouatbi, Sonia; Alqodwa, Ashraf; Ben Mdella, Samia; Ben Saad, Helmi
2013-10-01
(i) To identify factors that influence FeNO values in healthy North African, Arab children aged 6-16 years; (ii) to test the applicability and reliability of previously published FeNO norms; and (iii) if needed, to establish FeNO norms in this population and to prospectively assess their reliability. This was a cross-sectional analytical study. A convenience sample of healthy Tunisian children aged 6-16 years was recruited. Subjects first responded to two questionnaires, and then FeNO levels were measured by an online method with an electrochemical analyzer (Medisoft, Sorinnes [Dinant], Belgium). Anthropometric and spirometric data were collected. Simple and multiple linear regressions were determined. The 95% confidence interval (95% CI) and upper limit of normal (ULN) were defined. Two hundred eleven children (107 boys) were retained. Anthropometric data, gender, socioeconomic level, obesity or puberty status, and sports activity were not independent influencing variables. FeNO data for the total sample appeared to be influenced only by maximum mid-expiratory flow (l sec⁻¹; r² = 0.0236, P = 0.0516). For boys, only the forced expiratory volume in one second (l) explained a slight (r² = 0.0451) but significant FeNO variability (P = 0.0281). For girls, FeNO was not significantly correlated with any of the measured data. For North African/Arab children, FeNO values were significantly lower than in other populations, and the available published FeNO norms did not reliably predict FeNO in our population. The mean ± SD (95% CI ULN, minimum-maximum) of FeNO (ppb) for the total sample was 5.0 ± 2.9 (5.4, 1.0-17.0). For North African, Arab children of any age, any FeNO value greater than 17.0 ppb may be considered abnormal. Finally, in an additional group of children assessed prospectively, we found no child with a FeNO higher than 17.0 ppb. Our FeNO norms enrich the global repository of FeNO norms that pediatricians can use to choose the most appropriate norms based on children's location or ethnicity. © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.
2013-01-01
With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-11-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
Relationship of type of work with health-related quality of life.
Kawabe, Yuri; Nakamura, Yasuyuki; Kikuchi, Sayuri; Suzukamo, Yoshimi; Murakami, Yoshitaka; Tanaka, Taichiro; Takebayashi, Toru; Okayama, Akira; Miura, Katsuyuki; Okamura, Tomonori; Fukuhara, Shunichi; Ueshima, Hirotsugu
2015-12-01
To examine the relation of work type with health-related quality of life (HRQoL) in healthy workers. We cross-sectionally examined 4427 (3605 men and 822 women) healthy workers in Japan, aged 19-69 years. We assessed HRQoL based on scores for five scales of the SF-36. Multiple regression was applied to examine the relation of work type (nighttime, shift, day-to-night, and daytime) with the five HRQoL norm-based scores, lower scores of which indicate poorer health status, adjusted for confounding factors, including sleeping duration. Shiftwork was inversely related to role physical [regression estimate (β) = -2.12, 95% confidence interval (CI) -2.94, -1.30, P < 0.001], general health (β = -1.37, 95% CI -2.01, -0.72, P < 0.001), role emotional (β = -1.24, 95% CI -1.98, -0.50, P < 0.001), and mental health (β = -1.31, 95% CI -2.01, -0.63, P < 0.001), independent of confounding factors, but not to vitality. Day-to-nighttime work was inversely related to all five HRQoL subscales (P values 0.012 to <0.001). Shiftwork was significantly inversely related to four of the five HRQoL scales, vitality excepted, and day-to-nighttime work was significantly inversely related to all five, independent of demographic and lifestyle factors.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
Graph properties of synchronized cortical networks during visual working memory maintenance.
Palva, Satu; Monto, Simo; Palva, J Matias
2010-02-15
Oscillatory synchronization facilitates communication in neuronal networks and is intimately associated with human cognition. Neuronal activity in the human brain can be non-invasively imaged with magneto- (MEG) and electroencephalography (EEG), but the large-scale structure of synchronized cortical networks supporting cognitive processing has remained uncharacterized. We combined simultaneous MEG and EEG (MEEG) recordings with minimum-norm-estimate-based inverse modeling to investigate the structure of oscillatory phase synchronized networks that were active during visual working memory (VWM) maintenance. Inter-areal phase-synchrony was quantified as a function of time and frequency by single-trial phase-difference estimates of cortical patches covering the entire cortical surfaces. The resulting networks were characterized with a number of network metrics that were then compared between delta/theta- (3-6 Hz), alpha- (7-13 Hz), beta- (16-25 Hz), and gamma- (30-80 Hz) frequency bands. We found several salient differences between frequency bands. Alpha- and beta-band networks were more clustered and small-world like but had smaller global efficiency than the networks in the delta/theta and gamma bands. Alpha- and beta-band networks also had truncated-power-law degree distributions and high k-core numbers. The data converge on showing that during the VWM-retention period, human cortical alpha- and beta-band networks have a memory-load dependent, scale-free small-world structure with densely connected core-like structures. These data further show that synchronized dynamic networks underlying a specific cognitive state can exhibit distinct frequency-dependent network structures that could support distinct functional roles. Copyright 2009 Elsevier Inc. All rights reserved.
Hardebeck, J.L.; Michael, A.J.
2006-01-01
We present a new focal mechanism stress inversion technique to produce regional-scale models of stress orientation containing the minimum complexity necessary to fit the data. Current practice is to divide a region into small subareas and to independently fit a stress tensor to the focal mechanisms of each subarea. This procedure may lead to apparent spatial variability that is actually an artifact of overfitting noisy data or nonuniquely fitting data that does not completely constrain the stress tensor. To remove these artifacts while retaining any stress variations that are strongly required by the data, we devise a damped inversion method to simultaneously invert for stress in all subareas while minimizing the difference in stress between adjacent subareas. This method is conceptually similar to other geophysical inverse techniques that incorporate damping, such as seismic tomography. In checkerboard tests, the damped inversion removes the stress rotation artifacts exhibited by an undamped inversion, while resolving sharper true stress rotations than a simple smoothed model or a moving-window inversion. We show an example of a spatially damped stress field for southern California. The methodology can also be used to study temporal stress changes, and an example for the Coalinga, California, aftershock sequence is shown. We recommend use of the damped inversion technique for any study examining spatial or temporal variations in the stress field.
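The damping idea, inverting all subareas simultaneously while penalizing differences between adjacent subareas, can be sketched in linearized least-squares form (illustrative only: stress inversion from focal mechanisms is nonlinear, and G, d, and the layout below are assumed placeholders):

```python
import numpy as np

def damped_joint_inversion(G, d, pairs, n_sub, n_par, beta):
    """Solve for stacked per-subarea parameters m (length n_sub * n_par),
    minimizing ||G m - d||^2 + beta^2 * sum over adjacent (i, j) of ||m_i - m_j||^2.

    pairs: list of (i, j) indices of adjacent subareas; beta: damping weight.
    """
    rows = []
    for i, j in pairs:
        D = np.zeros((n_par, n_sub * n_par))
        D[:, i * n_par:(i + 1) * n_par] = np.eye(n_par)    # +m_i
        D[:, j * n_par:(j + 1) * n_par] = -np.eye(n_par)   # -m_j
        rows.append(D)
    D = np.vstack(rows)
    A = np.vstack([G, beta * D])
    b = np.concatenate([d, np.zeros(D.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Sweeping beta and comparing data misfit against inter-subarea roughness mirrors the checkerboard-style tuning the abstract describes: large beta collapses to a single uniform stress field, small beta reproduces the undamped, artifact-prone inversion.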
Kinematical synthesis of an inversion of the double linked fourbar for morphing wing applications
NASA Astrophysics Data System (ADS)
Aguirrebeitia, J.; Avilés, R.; Fernández, I.; Abasolo, M.
2013-03-01
This paper presents the kinematical features of an inversion of the double-linked fourbar for morphing wing purposes. The structure of the mechanism is obtained using structural synthesis concepts, starting from an initial conceptual schematic. Kinematic characteristics such as the instant center of rotation, lock positions, dead-point positions and uncertainty positions are then derived for this mechanism in order to face the last step, the dimensional synthesis; in this sense, two kinds of dimensional synthesis are arranged: to guide the wing along two positions, and, with the second, to satisfy some aerodynamic and minimum-actuation-energy requirements.
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
Minimum relative entropy, Bayes and Kapur
NASA Astrophysics Data System (ADS)
Woodbury, Allan D.
2011-04-01
The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as 'data' rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments are not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, including second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown to be a solution of MRE under the conditions of more data than unknowns, where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match the actual density profile quite closely, at least in the upper portions of the profile. The similarity of these results to Bayes reflects the fact that the MRE posterior pdf, and its mean, are constrained not by d = Gm but by its first moment E(d) = Gm, a weakened form of the constraints. If there is no error in the data, then one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are obtained when second-moment data are available (e.g. posterior covariance equal to zero), but dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equal. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.
How Different Marker Sets Affect Joint Angles in Inverse Kinematics Framework.
Mantovani, Giulia; Lamontagne, Mario
2017-04-01
The choice of marker set is a source of variability in motion analysis. Studies exist that assess the performance of marker sets when direct kinematics is used, but these results cannot be extrapolated to the inverse kinematic framework. Therefore, the purpose of this study was to examine the sensitivity of kinematic outcomes to inter-marker-set variability in an inverse kinematic framework. The compared marker sets were plug-in-gait, the University of Ottawa motion analysis model, and a three-marker-cluster marker set. Walking trials of 12 participants were processed in OpenSim. The coefficient of multiple correlation was very good for sagittal (>0.99) and frontal (>0.92) plane angles, but worsened for the transverse plane (0.72). Absolute reliability indices are also provided for comparison among studies: minimum detectable change values ranged from 3 deg for the hip sagittal range of motion to 16.6 deg for the hip transverse range of motion. Ranges of motion of hip and knee abduction/adduction angles and hip and ankle rotations were significantly different among the three marker configurations (P < 0.001), with plug-in-gait producing larger ranges of motion. Although the same model was used for all the marker sets, the resulting minimum detectable changes were high and clinically relevant, which warrants caution when comparing studies that use different marker configurations, especially if they differ in the joint-defining markers.
2012-12-01
requirements as part of an overall medical support concept. In this document several potential CONOPS proposals are added as food for thought (see Chapter 4) ... safe flight minimums for manned flight; • En route or terminal environment (landing zone) is contaminated by an industrial spill or by a CBRN event ... Further, the U.S. Food and Drug Administration (FDA) and other national/international medical regulatory authorities have requirements for portable
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
NASA Astrophysics Data System (ADS)
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the gradient direction was computed accurately using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
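A schematic of the iterative FWI loop described above, with the l2-norm objective, back-propagated gradient, and pseudo-Hessian scaling. Here `forward`, `adjoint_gradient`, and `pseudo_hessian` are placeholder callables standing in for the staggered-grid modeling steps, not the authors' code:

```python
# Schematic FWI loop under an l2-norm objective; a sketch, not the paper's
# implementation. The callables encapsulate modeling and back-propagation.
import numpy as np

def fwi(m0, d_obs, forward, adjoint_gradient, pseudo_hessian,
        n_iter=20, step=1e-2):
    m = m0.copy()
    for _ in range(n_iter):
        d_syn = forward(m)                     # modeled OBS data
        r = d_syn - d_obs                      # residuals
        g = adjoint_gradient(m, r)             # back-propagation gradient
        g /= pseudo_hessian(m) + 1e-12         # pseudo-Hessian scaling
        m -= step * g                          # steepest-descent update
    return m
```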
No-Ghost Theorem for Neveu-Schwarz String in 0-Picture
NASA Astrophysics Data System (ADS)
Kohriki, M.; Kunitomo, H.; Murata, M.
2010-12-01
The no-ghost theorem for the Neveu-Schwarz string is proved directly in the 0-picture. The one-to-one correspondence between physical states in the 0-picture and in the conventional (-1)-picture is confirmed. It is shown that a nontrivial metric consistent with the BRST cohomology is needed to define a positive semidefinite norm in the physical Hilbert space. As a by-product, we find a new inverse picture-changing operator, which is noncovariant but has a nonsingular operator product with itself. A possibility of constructing a new gauge-invariant superstring field theory is discussed.
Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink
2013-03-21
number of transmitters). The capacity is

$$C = \log_2 \left| I_{N_r} + \frac{E_s}{N_t N_0} H H^H \right| \quad (2.32)$$

In the signal-to-noise ratio, $E_s$ represents the total energy from all transmitters ... channel matrix pseudo-inverse is computed by (2.36) [6, p. 970]

$$H^{+} = \left( H^H H \right)^{-1} H^H. \quad (2.36)$$

2.6.5 Minimum Mean-Squared Error Detection. Minimum Mean Squared ...

$$H^{\dagger} = \left( H^H H + \frac{N_t}{\mathrm{SNR}} I \right)^{-1} H^H. \quad (3.14)$$

Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received
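The reconstructed equalizers can be sanity-checked numerically. This NumPy sketch assumes a generic flat-fading model y = Hx + n and is not drawn from the cited report:

```python
# Zero-forcing and MMSE equalizers for a toy MIMO channel.
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, snr = 4, 6, 100.0
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Zero-forcing pseudo-inverse, Eq. (2.36): H+ = (H^H H)^{-1} H^H
H_zf = np.linalg.inv(H.conj().T @ H) @ H.conj().T

# MMSE equalizer, Eq. (3.14): H† = (H^H H + (Nt/SNR) I)^{-1} H^H
H_mmse = np.linalg.inv(H.conj().T @ H + (Nt / snr) * np.eye(Nt)) @ H.conj().T

x = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
y = H @ x + 0.05 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
print(np.linalg.norm(H_zf @ y - x), np.linalg.norm(H_mmse @ y - x))
```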
NASA Astrophysics Data System (ADS)
Yan, Ping; Kalscheuer, Thomas; Hedin, Peter; Garcia Juanatey, Maria A.
2017-04-01
We present a novel 2-D magnetotelluric (MT) inversion scheme, in which the local weights of the regularizing smoothness constraints are based on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. Successful application of the inversion to MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using the envelope attribute of the COSC reflection seismic profile helped to reduce the uncertainty of the interpretation of the main décollement by demonstrating that the associated alum shales may be much thinner than suggested by a previous inversion model. Thus, the new model supports the proposed location of a future borehole COSC-2 which is hoped to penetrate the main décollement and the underlying Precambrian basement.
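A hedged sketch of how such locally weighted smoothness constraints might be built from an envelope image; the weight law and normalization below are assumptions for illustration, not the authors' exact formulation:

```python
# Illustrative construction of direction-dependent smoothness weights from a
# seismic envelope attribute: large envelope gradients relax (shrink) the
# corresponding smoothness constraint.
import numpy as np

def smoothness_weights(envelope, eps=1e-3):
    gz, gx = np.gradient(envelope)            # vertical / horizontal gradients
    wz = 1.0 / (1.0 + np.abs(gz) / (np.abs(gz).max() + eps))
    wx = 1.0 / (1.0 + np.abs(gx) / (np.abs(gx).max() + eps))
    return wz, wx                              # weights for vertical / horizontal constraints
```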
NASA Astrophysics Data System (ADS)
Schmidt, Torsten; Heise, Stefan; Wickert, Jens; Haser, Antonia; Cammas, Jean-Pierre; Smit, Herman G. J.
In this study we discuss characteristics of the tropopause inversion layer (TIL) based on two datasets. Temperature measurements from GPS radio occultation (RO) data (CHAMP and GRACE) for the time interval 2001-2009 are used to exhibit seasonal properties of the TIL on a global scale. In agreement with previous studies, the vertical structure of the TIL is investigated using the square of the buoyancy frequency, N². In the extratropics of both hemispheres, N² has a universal distribution independent of season: a local minimum about 2 km below the lapse rate tropopause height (LRTH), an absolute maximum about 1 km above the LRTH, and a local minimum about 4 km above the LRTH. In the tropics (15° N-15° S) the N² maximum above the tropopause is 200-300 m higher compared with the extratropics, and the local minimum of N² below the tropopause appears about 4 km below the LRTH. Trace gas measurements aboard commercial aircraft from 2001-2008 are used as a complementary dataset (MOZAIC program). We demonstrate that the mixing ratio gradients of ozone, carbon monoxide and water vapor are suitable parameters for characterizing the TIL, reproducing most of the vertical structure of N². We also show that the LRTH is strongly correlated with the absolute maxima of the ozone and carbon monoxide mixing ratio gradients.
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
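To make the objects concrete, here is a small sketch of a Hankel matrix built from output data and the nuclear norm used as a rank surrogate; the toy signal and sizes are assumptions:

```python
# Hankel matrix from an output sequence, and its nuclear norm (sum of
# singular values), the convex surrogate for rank minimized in NN-SI.
import numpy as np
from scipy.linalg import hankel

y = np.sin(0.3 * np.arange(100))          # measured output (toy data)
r = 10                                    # number of block rows
Y = hankel(y[:r], y[r - 1:])              # Hankel matrix of the output

nuclear_norm = np.linalg.norm(Y, ord='nuc')
print(Y.shape, nuclear_norm)
```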
Grube, Joel W.; Paschall, Mallie J.
2009-01-01
Strategies to enforce underage drinking laws are aimed at reducing youth access to alcohol from commercial and social sources and deterring its possession and use. However, little is known about the processes through which enforcement strategies may affect underage drinking. The purpose of the current study is to present and test a conceptual model that specifies possible direct and indirect relationships among adolescents’ perception of community alcohol norms, enforcement of underage drinking laws, personal beliefs (perceived parental disapproval of alcohol use, perceived alcohol availability, perceived drinking by peers, perceived harm and personal disapproval of alcohol use), and their past-30-day alcohol use. This study used data from 17,830 middle and high school students who participated in the 2007 Oregon Health Teens Survey. Structural equations modeling indicated that perceived community disapproval of adolescents’ alcohol use was directly and positively related to perceived local police enforcement of underage drinking laws. In addition, adolescents’ personal beliefs appeared to mediate the relationship between perceived enforcement of underage drinking laws and past-30-day alcohol use. Enforcement of underage drinking laws appeared to partially mediate the relationship between perceived community disapproval and personal beliefs related to alcohol use. Results of this study suggest that environmental prevention efforts to reduce underage drinking should target adults’ attitudes and community norms about underage drinking as well as the beliefs of youth themselves. PMID:20135210
Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.
Gong, Changcheng; Cai, Yufang; Zeng, Li
2018-01-01
For cone-beam computed tomography (CBCT), transversal shifts of the rotation center are inevitable and result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a fixed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels for both noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real-data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data, with textures well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.
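A toy rendering of the second calibration step: scan candidate transversal shifts and keep the one minimizing the L0 norm of the reconstructed image gradient. `reconstruct` is a placeholder for the GPU-based FDK reconstruction, and the threshold and step size are assumptions:

```python
# Grid search over transversal shifts, scored by the L0 norm (count of
# non-zeros) of the reconstructed image's gradient magnitude.
import numpy as np

def calibrate_shift(projections, reconstruct, search_range, step=0.25, tol=1e-4):
    best_shift, best_l0 = None, np.inf
    for shift in np.arange(*search_range, step):
        img = reconstruct(projections, shift)           # placeholder FDK call
        gz, gx = np.gradient(img)
        l0 = np.count_nonzero(np.hypot(gz, gx) > tol)   # L0 of gradient image
        if l0 < best_l0:
            best_shift, best_l0 = shift, l0
    return best_shift
```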
Attachment theory and theory of planned behavior: an integrative model predicting underage drinking.
Lac, Andrew; Crano, William D; Berger, Dale E; Alvaro, Eusebio M
2013-08-01
Research indicates that peer and maternal bonds play important but sometimes contrasting roles in the outcomes of children. Less is known about attachment bonds to these 2 reference groups in young adults. Using a sample of 351 participants (18 to 20 years of age), the research integrated two theoretical traditions: attachment theory and theory of planned behavior (TPB). The predictive contribution of both theories was examined in the context of underage adult alcohol use. Using full structural equation modeling, results substantiated the hypotheses that secure peer attachment positively predicted norms and behavioral control toward alcohol, but secure maternal attachment inversely predicted attitudes and behavioral control toward alcohol. Alcohol attitudes, norms, and behavioral control each uniquely explained alcohol intentions, which anticipated an increase in alcohol behavior 1 month later. The hypothesized processes were statistically corroborated by tests of indirect and total effects. These findings support recommendations for programs designed to curtail risky levels of underage drinking using the tenets of attachment theory and TPB.
4D inversion of time-lapse magnetotelluric data sets for monitoring geothermal reservoir
NASA Astrophysics Data System (ADS)
Nam, Myung Jin; Song, Yoonho; Jang, Hannuree; Kim, Bitnarae
2017-06-01
The productivity of a geothermal reservoir, which is a function of the pore space and fluid-flow paths of the reservoir, varies as the reservoir properties change during production. Because these variations cause changes in electrical resistivity, time-lapse (TL) three-dimensional (3D) magnetotelluric (MT) methods can be applied to monitor the productivity of a geothermal reservoir, thanks not only to their sensitivity to electrical resistivity but also to their large depth of penetration. For an accurate interpretation of TL MT data sets, a four-dimensional (4D) MT inversion algorithm has been developed to simultaneously invert all vintage data while considering time-coupling between vintages. However, the changes in electrical resistivity of deep geothermal reservoirs are usually small, generating only minimal variation in the TL MT responses. Maximizing the sensitivity of the inversion to these resistivity changes is critical to the success of 4D MT inversion. Thus, we further developed a focused 4D MT inversion method that considers not only the location of the reservoir but also the distribution of newly generated fractures during production. For evaluation, we tested our 4D inversion algorithms using synthetic TL MT data sets.
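In generic notation (an assumption for illustration, not quoted from the paper), a time-coupled 4D objective of the kind described combines per-vintage misfit and smoothness terms with a coupling penalty between consecutive vintages:

```latex
% Vintages t = 1..T, data d_t, models m_t, forward operator F,
% roughness operator W; alpha and beta are trade-off parameters.
\Phi(m_1,\dots,m_T) \;=\; \sum_{t=1}^{T} \Big[ \lVert d_t - F(m_t)\rVert^2
  \;+\; \alpha \,\lVert W m_t \rVert^2 \Big]
  \;+\; \beta \sum_{t=2}^{T} \lVert m_t - m_{t-1} \rVert^2 .
```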
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in providing reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This mathematical problem is not easy to solve: it requires finding an optimum solution within a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they become trapped in local minima. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
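A minimal Hybrid (Hamiltonian) Monte Carlo step of the kind underlying such statistical inversion; `logp` and `grad` are placeholders for the log-posterior of the geosteering model and its gradient, and the step size and trajectory length are illustrative:

```python
# One HMC step: sample a momentum, integrate Hamiltonian dynamics with
# leapfrog, then Metropolis accept/reject on the total energy.
import numpy as np

def hmc_step(x, logp, grad, eps=0.05, n_leap=20, rng=np.random.default_rng()):
    p = rng.standard_normal(x.size)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad(x_new)          # half-step in momentum
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad(x_new)          # final half-step
    dh = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < dh else x
```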
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using the a priori information about the location of the source, that is, the STN. Secondly, we investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
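Since minimum-norm estimation is central here, a compact sketch of the Tikhonov-regularized minimum-norm inverse may help; the random leadfield stands in for a BEM-derived leadfield, and the regularization level is an assumption:

```python
# Regularized minimum-norm estimate: sources j minimizing ||j|| subject to
# (approximately) fitting data d = L j, via Tikhonov damping.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))   # leadfield (BEM in practice)
d = rng.standard_normal(n_sensors)                # one EEG/MEG sample

lam = 0.1 * np.trace(L @ L.T) / n_sensors         # damping level (assumed)
j_mn = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), d)
print(j_mn.shape)   # one current estimate per source location
```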
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.
2014-05-01
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a data base of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
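A schematic solver for the ℓ1-regularized variational problem described above, using the iterative shrinkage-thresholding algorithm (ISTA); taking the sparsifying transform as the identity is a simplifying assumption here, whereas the paper works in a wavelet/derivative domain:

```python
# ISTA for min_x 0.5*||y - H x||^2 + lam*||x||_1 (identity transform assumed).
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(y, H, lam=0.1, n_iter=200):
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = soft(x - step * H.T @ (H @ x - y), step * lam)
    return x
```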
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transform one genome arrangement to another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction problems arising from electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool in a CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS proved to reconstruct the sub-surface image effectively at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
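In generic notation (an assumption, not the paper's exact statement), both CS algorithms solve a sparsity-promoting program of the form:

```latex
% Model m, DCT operator W, linearized forward operator G, data d,
% noise level eps.
\min_{m} \; \lVert W m \rVert_{1}
  \quad \text{subject to} \quad \lVert d - G m \rVert_{2} \le \varepsilon .
```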
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
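As a hedged illustration of the limited-memory BFGS choice discussed above, the SciPy call below applies L-BFGS to a stand-in quadratic misfit; a real waveform misfit would wrap wave simulations instead of this toy function:

```python
# L-BFGS on an ill-conditioned quadratic, illustrating the optimizer class
# (not the paper's inversion code).
import numpy as np
from scipy.optimize import minimize

A = np.diag(np.linspace(1.0, 50.0, 20))     # toy ill-conditioned Hessian
misfit = lambda m: 0.5 * m @ A @ m
grad = lambda m: A @ m

m0 = np.ones(20)
res = minimize(misfit, m0, jac=grad, method='L-BFGS-B',
               options={'maxcor': 5})        # number of stored curvature pairs
print(res.nit, res.fun)
```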
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2016-12-01
When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy) without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare synthetic inversion results using the L2-norm, cross-correlation-type, and integral-type misfit functions by their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs for the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.
Thermodynamical transcription of density functional theory with minimum Fisher information
NASA Astrophysics Data System (ADS)
Nagy, Á.
2018-03-01
Ghosh, Berkowitz and Parr designed a thermodynamical transcription of the ground-state density functional theory and introduced a local temperature that varies from point to point. The theory, however, is not unique because the kinetic energy density is not uniquely defined. Here we derive the expression of the phase-space Fisher information in the GBP theory taking the inverse temperature as the Fisher parameter. It is proved that this Fisher information takes its minimum for the case of constant temperature. This result is consistent with the recently proven theorem that the phase-space Shannon information entropy attains its maximum at constant temperature.
ERIC Educational Resources Information Center
Thorpe, Andy; Snell, Martin; Davey-Evans, Sue; Talman, Richard
2017-01-01
There is an established, if weak, inverse relationship between levels of English language proficiency and academic performance in higher education. In response, higher education institutions (HEIs) insist upon minimum entry requirements concerning language for international applicants. Many HEIs now also offer pre-sessional English courses to…
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines 2 prior estimates of a population parameter with a weighted average, where each scalar weight is inversely proportional to the corresponding variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
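The composite estimate referred to above has the standard closed form (standard notation, supplied here for reference):

```latex
% Combine prior estimates x_1, x_2 with inverse-variance weights.
\hat{x} \;=\; \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\operatorname{Var}(\hat{x}) \;=\; \left( \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} \right)^{-1}.
```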
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.
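One common form of the 2nd-order minimum norm (2MNI) integrator follows the Omelyan-type five-stage scheme; the parameter value and the placeholder `force` (standing for −dS/dx of the RSV action) are assumptions for illustration, not taken from the paper:

```python
# One MD trajectory with a 2nd-order minimum norm integrator for
# H = p^2/2 + S(x); lam is the minimum-norm parameter.
import numpy as np

def mn2_trajectory(x, p, force, eps=0.1, n_steps=50, lam=0.1931833):
    for _ in range(n_steps):
        p = p + lam * eps * force(x)
        x = x + 0.5 * eps * p
        p = p + (1.0 - 2.0 * lam) * eps * force(x)
        x = x + 0.5 * eps * p
        p = p + lam * eps * force(x)
    return x, p
```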
Saddeek, Ali Mohamed
2017-01-01
Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria and are applicable to multidimensional phenomena characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
Ultra-Broad-Band Optical Parametric Amplifier or Oscillator
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry; Matsko, Andrey; Savchenkov, Anatolly; Maleki, Lute
2009-01-01
A concept for an ultra-broad-band optical parametric amplifier or oscillator has emerged as a by-product of a theoretical study in fundamental quantum optics. The study was originally intended to address the question of whether the two-photon temporal correlation function of light [in particular, light produced by spontaneous parametric down conversion (SPDC)] can be considerably narrower than the inverse of the spectral width (bandwidth) of the light. The answer to the question was found to be negative. More specifically, on the basis of the universal integral relations between the quantum two-photon temporal correlation and the classical spectrum of light, it was found that the lower limit of the two-photon correlation time is set approximately by the inverse of the bandwidth. The mathematical solution for the minimum two-photon correlation time also provides the minimum relative frequency dispersion of the down-converted light components; in turn, the minimum relative frequency dispersion translates to the maximum bandwidth, which is important for the design of an ultra-broad-band optical parametric oscillator or amplifier. In the study, results of an analysis of the general integral relations were applied in the case of an optically nonlinear, frequency-dispersive crystal in which SPDC produces collinear photons. Equations were found for the crystal orientation and pump wavelength, specific for each parametric-down-converting crystal, that eliminate the relative frequency dispersion of collinear degenerate (equal-frequency) signal and idler components up to the fourth order in the frequency-detuning parameter.
Kwong, Huey Chong; Sim, Aijia; Chidan Kumar, C S; Then, Li Yee; Win, Yip-Foo; Quah, Ching Kheng; Naveen, S; Warad, Ismail
2017-12-01
The asymmetric unit of the title compound, C24H14F4O2, comprises one and a half molecules; the half-molecule is completed by crystallographic inversion symmetry. In the crystal, molecules are linked into a three-dimensional network by C-H⋯F and C-H⋯O hydrogen bonds. Some of the C-H⋯F links are unusually short (< 2.20 Å). Hirshfeld surface analyses (d_norm surfaces and two-dimensional fingerprint plots) for the title compound are presented and discussed.
On Use of Multi-Chambered Fission Detectors for In-Core, Neutron Spectroscopy
NASA Astrophysics Data System (ADS)
Roberts, Jeremy A.
2018-01-01
Presented is a short computational study on the potential use of multichambered fission detectors for in-core neutron spectroscopy. Motivated by the development of very small fission chambers at CEA in France and at Kansas State University in the U.S., it was assumed in this preliminary analysis that devices can be made small enough to avoid flux perturbations and that uncertainties related to measurements can be ignored. It was hypothesized that a sufficient number of chambers with unique reactants can act as a real-time, foil-activation experiment. An unfolding scheme based on maximizing (Shannon) entropy was used to produce a flux spectrum from detector signals that requires no prior information. To test the method, integral detector responses were generated for single-isotope detectors of various Th, U, Np, Pu, Am, and Cs isotopes using a simplified pressurized-water reactor spectrum and flux-weighted, microscopic fission cross sections in the WIMS-69 multigroup format. An unfolded spectrum was found from subsets of these responses that had maximum entropy while reproducing the responses considered and summing to one (that is, they were normalized). Several nuclide subsets were studied, and, as expected, the results indicate that inclusion of more nuclides leads to better spectra but with diminishing improvements, with the best-case spectrum having an average, relative, group-wise error of approximately 51%. Furthermore, spectra found from minimum-norm and Tikhonov-regularization inversion were of lower quality than the maximum entropy solutions. Finally, the addition of thermal-neutron filters (here, Cd and Gd) provided substantial improvement over unshielded responses alone. The results, as a whole, suggest that in-core neutron spectroscopy is at least marginally feasible.
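A toy version of the entropy-maximizing unfolding idea, with a synthetic response matrix standing in for the flux-weighted cross sections; the solver choice and problem sizes are assumptions:

```python
# Maximum-entropy unfolding sketch: find a normalized group flux phi
# maximizing Shannon entropy while reproducing detector responses R @ phi = s.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_groups, n_dets = 69, 8                 # e.g. WIMS-69 groups, 8 nuclides
R = rng.random((n_dets, n_groups))       # synthetic response matrix
phi_true = rng.random(n_groups); phi_true /= phi_true.sum()
s = R @ phi_true                         # "measured" integral responses

neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))
cons = [{'type': 'eq', 'fun': lambda p: R @ p - s},
        {'type': 'eq', 'fun': lambda p: p.sum() - 1.0}]
res = minimize(neg_entropy, np.full(n_groups, 1.0 / n_groups),
               constraints=cons, bounds=[(0, 1)] * n_groups, method='SLSQP')
print(res.success)
```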
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)_{12} model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion (BIC). The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through kernel and normal density curves of the histogram and the Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are presented using the selected model.
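The selected model class can be fitted, for illustration, with statsmodels; the synthetic `temps` series below is an assumption standing in for the Indian monthly temperature data:

```python
# Fit a SARIMA (1,0,0)x(0,1,1)_12 model to a log-transformed monthly series.
import numpy as np
import statsmodels.api as sm

temps = 25 + 5 * np.sin(2 * np.pi * np.arange(240) / 12)  # synthetic monthly data
model = sm.tsa.SARIMAX(np.log(temps),                     # log transform as above
                       order=(1, 0, 0),
                       seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.bic)                 # criterion used for model selection
forecast = fit.forecast(36)    # next 3 years, as in the paper
```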
Prada, Carlos F; Delprat, Alejandra; Ruiz, Alfredo
2011-02-01
The chromosomal relationships of the four martensis cluster species are among the most complex and intricate within the entire Drosophila repleta group, due to the so-called sharing of inversions. Here, we have revised these relationships using comparative mapping of bacterial artificial chromosome (BAC) clones on the salivary gland chromosomes. A physical map of chromosome 2 of Drosophila uniseta (one of the cluster members) was generated by in situ hybridization of 82 BAC clones from the physical map of the Drosophila buzzatii genome (an outgroup that represents the ancestral arrangement). By comparing the marker positions, we determined the number, order, and orientation of conserved chromosomal segments between chromosome 2 of D. buzzatii and D. uniseta. GRIMM software was used to infer that a minimum of five chromosomal inversions are necessary to transform the chromosome 2 of D. buzzatii into that of D. uniseta. Two of these inversions have been overlooked in previous cytological analyses. The five fixed inversions entail two breakpoint reuses because only nine syntenic segments and eight interruptions were observed. We tested for the presence of the five inversions fixed in D. uniseta in the other three species of the martensis cluster by in situ hybridization of eight breakpoint-bearing BAC clones. The results shed light on the chromosomal phylogeny of the martensis cluster, yet leave a number of questions open.
Liu, Peng; Zhang, Jingxue; Wang, Dunyou
2017-06-07
A double-inversion mechanism of the F- + CH3I reaction was discovered in aqueous solution using combined multi-level quantum mechanics theories and molecular mechanics. The stationary points along the reaction path show very different structures from those in the gas phase due to the interactions between the solvent and solute, especially strong hydrogen bonds. An intermediate complex, a minimum on the potential of mean force, was found to serve as a connecting link between the abstraction-induced inversion transition state and the Walden-inversion transition state. The potentials of mean force were calculated at both the DFT/MM and CCSD(T)/MM levels of theory. Our calculated free energy barrier of the abstraction-induced inversion is 69.5 kcal/mol at the CCSD(T)/MM level of theory, which agrees with the value of 72.9 kcal/mol calculated using the Born solvation model and gas-phase data; and our calculated free energy barrier of the Walden inversion is 24.2 kcal/mol, which agrees very well with the experimental value of 25.2 kcal/mol in aqueous solution. The calculations show that the aqueous solution makes significant contributions to the potentials of mean force and exerts a large impact on the molecular-level evolution along the reaction pathway.
Reconstructing the duty of water: a study of emergent norms in socio-hydrology
NASA Astrophysics Data System (ADS)
Wescoat, J. L., Jr.
2013-06-01
This paper assesses changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, a line of research useful for anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late-18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for water rights appropriation (e.g., only 40 to 80 acres per cfs). The final section shows that while the duty of water concept has now been eclipsed by other measures and standards of water efficiency, it may have continuing relevance for anticipating if not predicting emerging social values with respect to water.
Three-dimensional modelling and inversion in gravity and electrical prospecting
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness, and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve long and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e., the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data. The advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. The best combination of constraints tested seems to be flatness and minimum volume for multiple bodies. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential. Modelling generates a huge matrix of Green's functions, which is stored using a pyramidal compression method. The third method consists in interpreting electrical potential measurements with a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
NASA Technical Reports Server (NTRS)
Wang, Tongjiang; Davila, Joseph M.
2014-01-01
Determining the coronal electron density by inversion of white-light polarized brightness (pB) measurements from coronagraphs is a classic problem in solar physics. An inversion technique based on spherically symmetric geometry (spherically symmetric inversion, SSI) was developed in the 1950s and has been widely applied to interpret various observations. However, to date there has been no study of the uncertainty estimation of this method. Here we present a detailed assessment of this method using as a model a three-dimensional (3D) electron density in the corona from 1.5 to 4 solar radii, reconstructed by a tomography method from STEREO/COR1 observations during the solar minimum in February 2008 (Carrington Rotation, CR 2066). We first show in theory and observation that the spherically symmetric polynomial approximation (SSPA) method and the Van de Hulst inversion technique are equivalent. Then we assess the SSPA method using synthesized pB images from the 3D density model, and find that the SSPA density values are close to the model inputs for the streamer core near the plane of the sky (POS), with differences generally smaller than about a factor of two; the former has a lower peak but extends more in both longitudinal and latitudinal directions than the latter. We estimate that the SSPA method may resolve coronal density structure near the POS with an angular resolution in longitude of about 50 deg. Our results confirm the suggestion that the SSI method is applicable to the solar minimum streamer (belt), as stated in some previous studies. In addition, we demonstrate that the SSPA method can be used to reconstruct the 3D coronal density, in rough agreement with the tomographic reconstruction for a period of low solar activity (CR 2066). We suggest that the SSI method is complementary to the 3D tomographic technique in some cases, given that development of the latter is still an ongoing research effort.
NASA Technical Reports Server (NTRS)
Lei, Shaw-Min; Yao, Kung
1990-01-01
A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages, and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Observation of earthquakes is widely used in tectonic activity monitoring, and also at local scales such as volcano-tectonic and geothermal activity observation. Precise hypocenter determination involves finding a hypocenter location that minimizes the error between observed and calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters using several data cases: regional tectonic, volcano-tectonic, and geothermal field. Travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to analyze reliability. Our results show hypocenter locations with smaller RMS errors than Geiger's method, which can be statistically associated with a better solution. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
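A bare-bones simulated-annealing hypocenter search sketching the idea above; `travel_time` is a placeholder for the ray-tracing shooting method, and the geometric cooling schedule simplifies the adaptive scheme used in the paper:

```python
# Simulated annealing over hypocenter coordinates, scored by travel-time RMS.
import numpy as np

def anneal_hypocenter(t_obs, stations, travel_time, x0,
                      n_iter=5000, T0=1.0, cool=0.999,
                      rng=np.random.default_rng()):
    def rms(x):
        return np.sqrt(np.mean((t_obs - travel_time(x, stations)) ** 2))
    x = np.asarray(x0, dtype=float)
    e, T = rms(x), T0
    for _ in range(n_iter):
        x_try = x + T * rng.standard_normal(x.size)   # perturbation shrinks with T
        e_try = rms(x_try)
        if e_try < e or rng.uniform() < np.exp((e - e_try) / T):
            x, e = x_try, e_try                        # accept move
        T *= cool                                      # cool the temperature
    return x, e
```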
Particle tracking by using single coefficient of Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Widjaja, J.; Dawprateep, S.; Chuamchaitrakool, P.; Meemon, P.
2016-11-01
A new method for extracting information from particle holograms by using a single coefficient of the Wigner-Ville distribution (WVD) is proposed to obviate the drawbacks of conventional numerical reconstructions. Our previous study found that analysis of the holograms using the WVD gives output coefficients that are mainly confined along a diagonal direction intercepting the origin of the WVD plane. The slope of this diagonal is inversely proportional to the particle position. One of these coefficients always has minimum amplitude, regardless of the particle position. By detecting the position of the coefficient with minimum amplitude in the WVD plane, the particle position can be accurately measured. The proposed method is verified through computer simulations.
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed-form solution is obtained for several wing configurations, including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth, with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
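Schematically, and in our notation rather than the paper's, the kernel-estimation step described above amounts to a variational problem of roughly the following form, where y is the blurred image, x the current latent-image estimate and k the blur kernel:

```latex
\min_{k}\ \frac{1}{2}\,\| y - k \ast x \|_2^2
  \;+\; \alpha\,\| k \|_1
  \;+\; \frac{\beta}{2}\,\| \nabla k \|_2^2 .
```

The L1 term promotes spatial sparsity of the kernel, the squared L2 norm of its derivative promotes piecewise smoothness along the kernel's support, and ADMM handles the resulting non-smooth optimization by splitting it into tractable subproblems.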
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used to match the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
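As a rough, self-contained illustration (not the authors' code), the Python sketch below applies FISTA to a weighted l1-regularized least-squares problem of the kind described, min_x ½||Ax - b||² + λ||diag(w)x||₁, with a random matrix standing in for the redundant dictionary of trigonometric and rectangular atoms and uniform weights standing in for the adaptive ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy problem: A stands in for the redundant concatenated dictionary
m, n = 100, 400
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8) * 3
b = A @ x_true + rng.normal(0, 0.01, m)

lam = 0.05
w = np.ones(n)                      # per-coefficient weights (weighted l1)
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient

def soft(z, t):
    # soft-thresholding: proximal operator of the (weighted) l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# FISTA: gradient step on the quadratic term + soft-thresholding,
# accelerated with Nesterov momentum on the iterates
x = np.zeros(n)
y, t = x.copy(), 1.0
for _ in range(300):
    grad = A.T @ (A @ y - b)
    x_new = soft(y - grad / L, lam * w / L)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)
    x, t = x_new, t_new

print("support recovered:", np.nonzero(np.abs(x) > 0.1)[0])
```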
Physical activity, but not fitness level, is associated with depression in Australian adults.
Forsyth, A; Williams, P; Deane, F P
2015-01-01
The objective of this study was to evaluate the fitness and physical activity levels of people referred to a nutrition and physical activity program for the management of mental health in general practice. General practitioners referred 109 patients being treated for depression and/or anxiety to a lifestyle intervention program. All participants completed anthropometric measurements and questionnaires including the Depression, Anxiety and Stress Scale (DASS) and the Active Australia Survey. Aerobic fitness was measured with the YMCA step test and muscular fitness was measured with repeated chair stands and arm curls. Fitness scores were compared to population norms, and physical activity levels were compared to population norms and national recommendations. Eighty percent of participants were overweight or obese. A greater proportion of study participants (51%) than the general Australian population (38%) met the recommended 150 minutes per week spent in moderate physical activity. However, participants demonstrated lower than average levels of fitness and participated in low levels of vigorous physical activity. Levels of physical activity, but not fitness, were inversely correlated with DASS scores. Patients presenting with depression and/or anxiety should be screened for physical activity behaviours and encouraged to meet the National Physical Activity Guidelines.
Lp-estimates on diffusion processes
NASA Astrophysics Data System (ADS)
Yan, Litan; Zhu, Bei
2005-03-01
Let $X=(X_t)_{t\ge 0}$ be a diffusion process on $\mathbb{R}$ given by $dX_t=\mu(X_t)\,dt+\sigma(X_t)\,dB_t$, where $B=(B_t)_{t\ge 0}$ is a standard Brownian motion starting at zero and $\mu,\sigma$ are two continuous functions on $\mathbb{R}$ with $\sigma(x)>0$ if $x\ne 0$. For a nonnegative continuous function $\phi$ we define the functional $J_t=\int_0^t\phi(X_s)\,ds$, $t\ge 0$. Then, under suitable conditions, we establish the relationship between the $L^p$-norm of $\sup_{0\le t\le\tau}X_t$ and the $L^p$-norm of $J_\tau$ for all stopping times $\tau$. In particular, for a Bessel process $Z$ of dimension $\delta>0$ starting at zero, we show that the inequalities hold for all $0<p<\infty$, where $C_p$ and $c_p$ are positive constants depending only on $p$, and $H_\mu$, $h_\mu$ are the inverses of $x\mapsto(e^{2\mu x}-2\mu x-1)/2\mu^2$ and $x\mapsto(e^{-2\mu x}+2\mu x-1)/2\mu^2$ on $(0,\infty)$, respectively.
Accumulated energy norm for full waveform inversion of marine data
NASA Astrophysics Data System (ADS)
Shin, Changsoo; Ha, Wansoo
2017-12-01
Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and the frequency domain have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the squares of the observed and modeled seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, the method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
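To make the quantity concrete, here is a minimal Python sketch under our own assumptions about windowing: the zero-frequency (DC) Fourier component of the squared trace over a window is simply the integral of u²(t) there, so the proposed objective compares accumulated energies rather than waveforms:

```python
import numpy as np

def accumulated_energy(trace, dt, windows):
    """Zero-frequency component of the squared trace per time window.

    The DC Fourier coefficient of u(t)^2 over a window is the integral
    of u^2 there, i.e. the energy accumulated in that window.
    """
    return np.array([np.sum(trace[i0:i1] ** 2) * dt for i0, i1 in windows])

def misfit(obs, syn, dt, windows):
    # least-squares difference of accumulated energies (our reading of
    # the objective; the paper may normalize or weight differently)
    r = accumulated_energy(syn, dt, windows) - accumulated_energy(obs, dt, windows)
    return 0.5 * np.sum(r ** 2)

# toy usage with two overlapping windows
dt = 0.004
t = np.arange(0, 4, dt)
obs = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 1.5) ** 2))
syn = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 1.7) ** 2))
windows = [(0, len(t) // 2), (len(t) // 4, len(t))]
print(misfit(obs, syn, dt, windows))
```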
Sparseness- and continuity-constrained seismic imaging
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.
2005-04-01
Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model, the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. The signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1 and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam, carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grant 22R81254.]
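Schematically (our notation, not the abstract's), the joint optimization couples a data-fidelity term with the two constraints, where d is the data, K the scattering/demigration operator, C the curvelet synthesis operator and x the coefficient vector of the reflectivity m = Cx:

```latex
\min_{x}\ \tfrac{1}{2}\,\| d - K C x \|_2^2
  \;+\; \lambda_1 \| x \|_1
  \;+\; \lambda_2\,\mathrm{TV}(C x),
```

with the l1 term enforcing sparsity on the coefficients and the anisotropic diffusion/total-variation term enforcing continuity along the imaged reflectors.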
Double-inversion mechanisms of the X⁻ + CH₃Y [X,Y = F, Cl, Br, I] SN2 reactions.
Szabó, István; Czakó, Gábor
2015-03-26
The double-inversion and front-side attack transition states, as well as the proton-abstraction channels, of the X⁻ + CH₃Y [X,Y = F, Cl, Br, I] reactions are characterized at the explicitly correlated CCSD(T)-F12b/aug-cc-pVTZ(-PP) level of theory using small-core relativistic effective core potentials and the corresponding aug-cc-pVTZ-PP bases for Br and I. In the X = F case the double-inversion classical (adiabatic) barrier heights are 28.7 (25.6), 15.8 (13.4), 13.2 (11.0), and 8.6 (6.6) kcal mol⁻¹ for Y = F, Cl, Br, and I, respectively, whereas the barrier heights are in the 40-90 kcal mol⁻¹ range for the other 12 reactions. The abstraction channels are always above the double-inversion saddle points. For X = F, the front-side attack classical (adiabatic) barrier heights, 45.8 (44.8), 31.0 (30.3), 24.7 (24.2), and 19.5 (19.3) kcal mol⁻¹ for Y = F, Cl, Br, and I, respectively, are higher than the corresponding double-inversion ones, whereas for the other systems the front-side attack saddle points are in the 35-70 kcal mol⁻¹ range. The double-inversion transition states have XH···CH₂Y⁻ structures with Cs point-group symmetry, and the front-side attack saddle points have either Cs (X = F or X = Y) or C1 symmetry, with XCY angles in the 78-88° range. On the basis of previous reaction dynamics simulations and minimum energy path computations along the inversion coordinate of selected XH···CH₂Y⁻ systems, we suggest that double inversion may be a general mechanism for SN2 reactions.
NASA Astrophysics Data System (ADS)
Schmidt, Torsten; Cammas, Jean-Pierre; Heise, Stefan; Wickert, Jens; Haser, Antonia
2010-05-01
In this study we discuss characteristics of the tropopause inversion layer (TIL) based on two datasets. Temperature measurements from GPS radio occultation (RO) data (CHAMP and GRACE) for the time interval 2001-2009 are used to exhibit seasonal properties of the TIL on a global scale. In agreement with previous studies, the vertical structure of the TIL is investigated using the square of the buoyancy frequency, N². For the extratropics of both hemispheres N² has a universal distribution independent of season: a local minimum about 2 km below the lapse-rate tropopause height (LRTH), an absolute maximum about 1 km above the LRTH, and a local minimum about 4 km above the LRTH. In the tropics (15°N-15°S) the N² maximum above the tropopause is 200-300 m higher than in the extratropics, and the local minimum of N² below the tropopause appears about 4 km below the LRTH. Trace gas measurements aboard commercial aircraft from 2001-2007 are used as a complementary dataset (MOZAIC program). We demonstrate that the mixing-ratio gradients of ozone, carbon monoxide and water vapor are suitable parameters for characterizing the TIL, reproducing most of the vertical structure of N². We also show that the LRTH is strongly correlated with the absolute maxima of the ozone and carbon monoxide mixing-ratio gradients. Mean deviations of the heights of the absolute maxima of the mixing-ratio gradients of O3 and CO from the LRTH are (-0.02±1.51) km and (-0.35±1.28) km, respectively.
Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M
1999-05-20
A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach makes it possible to overcome the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Whereas those procedures are inherently not adaptive, because independent inversions are performed for each return signal and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account, the Kalman filter updates itself with each new lidar return because it is weighted by the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates based on a minimum-variance criterion. Calibration errors and initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach enables one to retrieve the optical parameters as time-range-dependent functions and hence to track the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
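For readers unfamiliar with the filter, a generic discrete EKF predict/update cycle looks as follows in Python. In the paper's setting the state would collect the range-dependent optical parameters and the measurement function would be the lidar equation; both are left abstract here, and all names are placeholders rather than the paper's model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : state estimate and covariance from the previous return
    z    : new measurement (a lidar return, in the paper's setting)
    f, F : state transition function and its Jacobian
    h, H : measurement function and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # predict through the (possibly nonlinear) evolution model
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # minimum-variance update weighted by the innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new
```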
Design of optimally normal minimum gain controllers by continuation method
NASA Technical Reports Server (NTRS)
Lim, K. B.; Juang, J.-N.; Kim, Z. C.
1989-01-01
A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of gains that are optimal in terms of robustness and control effort for pole placement problems.
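The abstract does not spell out the index; one standard measure from the matrix-analysis literature (Henrici's departure from normality) and an illustrative cost of the kind described, in our notation with G the pole-placement gain, W a weighting and A - BG the closed-loop matrix, would read:

```latex
\nu(A_c) \;=\; \bigl\| A_c^{T} A_c - A_c A_c^{T} \bigr\|_{F},
\qquad
J(G) \;=\; \| W\,G \|_{F} \;+\; \rho\,\nu(A - BG).
```

Since a matrix is normal exactly when it commutes with its transpose, ν = 0 corresponds to an orthogonal set of eigenvectors, for which the assigned poles are maximally insensitive to perturbations.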
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
2008-10-01
Defence R&D Canada – Toronto; October 2008. Introduction or context: In February 2002, the Director General – Policy and Planning in Military Human Resources abolished the Canadian Forces' minimum height norm. It was concluded that "the…"
Characterization for stability in planar conductivities
NASA Astrophysics Data System (ADS)
Faraco, Daniel; Prats, Martí
2018-05-01
We find a complete characterization for sets of uniformly strongly elliptic and isotropic conductivities with stable recovery in the L2 norm when the data of the Calderón Inverse Conductivity Problem are obtained on the boundary of a disk and the conductivities are constant in a neighborhood of that boundary. To obtain this result, we present minimal a priori assumptions which turn out to be sufficient for sets of conductivities to have stable recovery in a bounded and rough domain. The condition is presented in terms of the integral moduli of continuity of the coefficients involved and their ellipticity bound, as conjectured by Alessandrini in his 2007 paper, giving explicit quantitative control for every pair of conductivities.
NASA Astrophysics Data System (ADS)
Özen, Kahraman Esen; Tosun, Murat
2018-01-01
In this study, we define the elliptic biquaternions and construct the algebra of elliptic biquaternions over the elliptic number field. We also give basic properties of elliptic biquaternions. An elliptic biquaternion has the form A0 + A1i + A2j + A3k, a linear combination of {1, i, j, k} in which the four components A0, A1, A2 and A3 are elliptic numbers. Here, 1, i, j, k are the quaternion basis of the elliptic biquaternion algebra and satisfy the same multiplication rules as in both the real quaternion algebra and the complex quaternion algebra. In addition, we discuss the notions of conjugate, inner product, semi-norm, modulus and inverse for elliptic biquaternions.
Dolejs, Josef; Marešová, Petra
2017-01-01
The answer to the question "At what age does aging begin?" is tightly related to the question "Where is the onset of mortality increase with age?" Age affects mortality rates from all diseases differently than it affects mortality rates from nonbiological causes. Mortality increase with age in adult populations has been modeled by many authors, and little attention has been given to mortality decrease with age after birth. Nonbiological causes are excluded, and the category "all diseases" is studied. It is analyzed in Denmark, Finland, Norway, and Sweden during the period 1994-2011, and all possible models are screened. Age trajectories of mortality are analyzed separately: before the age category where mortality reaches its minimal value and after the age category. Resulting age trajectories from all diseases showed a strong minimum, which was hidden in total mortality. The inverse proportion between mortality and age fitted in 54 of 58 cases before mortality minimum. The Gompertz model with two parameters fitted as mortality increased with age in 17 of 58 cases after mortality minimum, and the Gompertz model with a small positive quadratic term fitted data in the remaining 41 cases. The mean age where mortality reached minimal value was 8 (95% confidence interval 7.05-8.95) years. The figures depict an age where the human population has a minimal risk of death from biological causes. Inverse proportion and the Gompertz model fitted data on both sides of the mortality minimum, and three parameters determined the shape of the age-mortality trajectory. Life expectancy should be determined by the two standard Gompertz parameters and also by the single parameter in the model c/x. All-disease mortality represents an alternative tool to study the impact of age. All results are based on published data.
PHOTOTROPISM OF GERMINATING MYCELIA OF SOME PARASITIC FUNGI
Uredinales on young wheat plants; distribution and significance of the phototropism of germinating mycelia -- confirmation of older data, examination of... eight additional Uredinales, probable meaning of negative phototropism for the occurrence of infection; analysis of the stimulus physiology of the... reaction -- the minimum effective illumination intensity, the effective spectral region, inversion of the phototropic reaction in liquid paraffin, the negative light-growth reaction, the light-sensitive zone.
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators; they are robust to data contamination compared with the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of three-component normal mixture models with large overlaps and heavy contamination. A real data example is also provided.
Stratospheric sounding by infrared heterodyne spectroscopy
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Kunde, V. G.; Mumma, M. J.; Kostiuk, T.; Buhl, D.; Frerking, M. A.
1978-01-01
Intensity profiles of infrared spectral lines of stratospheric constituents can be fully resolved with a heterodyne spectrometer of sufficiently high resolution. The constituents' vertical distributions can then be evaluated accurately by analytic inversion of the measured line profiles. Estimates of the detection sensitivity of a heterodyne receiver are given in terms of minimum detectable volume mixing ratios of stratospheric constituents, indicating a large number of minor constituents which can be studied. Stratospheric spectral line shapes, and the resolution required to measure them are discussed in light of calculated synthetic line profiles for some stratospheric molecules in a model atmosphere. The inversion technique for evaluation of gas concentration profiles is briefly described and applications to synthetic lines of O3, CO2, CH4 and N2O are given.
VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm
NASA Astrophysics Data System (ADS)
Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo
2015-01-01
Electrical (DC) and transient electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished by individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface: Vertical Electrical Sounding (VES) is good at marking resistive structures, while TEM sounding is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. The algorithm was first tested with synthetic data and then used to invert experimental data from two places in the Paraná sedimentary basin (the cities of Bebedouro and Pirassununga), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of VES and TEM data is consistent with the real geological setting, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
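For concreteness, a bare-bones Price-type Controlled Random Search loop can be sketched in Python as below; the quadratic toy misfit stands in for the actual VES/TEM forward modeling, and the population size, iteration count and bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def crs_minimize(fun, bounds, n_pop=50, iters=5000):
    """Controlled Random Search (Price-type) global minimizer sketch."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((n_pop, dim)) * (hi - lo)   # random initial population
    cost = np.array([fun(p) for p in pop])
    for _ in range(iters):
        idx = rng.choice(n_pop, dim + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]         # reflect through the centroid
        if np.all(trial >= lo) and np.all(trial <= hi):
            c = fun(trial)
            worst = np.argmax(cost)
            if c < cost[worst]:                       # replace the current worst point
                pop[worst], cost[worst] = trial, c
    best = np.argmin(cost)
    return pop[best], cost[best]

# toy usage: recover two "layer" parameters of a quadratic misfit
target = np.array([120.0, 35.0])                      # e.g. resistivity, thickness
misfit = lambda m: np.sum((m - target) ** 2)
bounds = np.array([[1.0, 1000.0], [1.0, 100.0]])
print(crs_minimize(misfit, bounds))
```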
A new inversion algorithm for HF sky-wave backscatter ionograms
NASA Astrophysics Data System (ADS)
Feng, Jing; Ni, Binbin; Lou, Peng; Wei, Na; Yang, Longquan; Liu, Wen; Zhao, Zhengyu; Li, Xue
2018-05-01
The HF sky-wave backscatter sounding system is capable of measuring the large-scale, two-dimensional (2-D) distribution of ionospheric electron density. The leading edge (LE) of a backscatter ionogram (BSI) is widely used for ionospheric inversion since it is hardly affected by any factor other than ionospheric electron density. Traditional BSI inversion methods have failed to distinguish LEs associated with different ionospheric layers and simply utilize the minimum group path of each operating frequency, which generally corresponds to the LE associated with the F2 layer. Consequently, while the inversion results can provide accurate profiles of the F region below the F2 peak, the diagnostics may not be as effective for other ionospheric layers. To resolve this issue, we present a new BSI inversion method using LEs associated with different layers, which can further improve the accuracy of the electron density distribution, especially the profiles of the ionospheric layers below the F2 region. The efficiency of the algorithm is evaluated by computing the mean and the standard deviation of the differences between inverted parameter values and true values obtained from both vertical- and oblique-incidence sounding. Test results clearly show that the method we have developed outputs more accurate electron density profiles, owing to its improved ability to recover the profiles of the layers below the F2 region. Our study can further improve current BSI inversion methods for the reconstruction of the 2-D electron density distribution in a vertical plane aligned with the direction of sounding.
Forward and inverse models of electromagnetic scattering from layered media with rough interfaces
NASA Astrophysics Data System (ADS)
Tabatabaeenejad, Seyed Alireza
This work addresses the problem of electromagnetic scattering from layered dielectric structures with rough boundaries and the associated inverse problem of retrieving the subsurface parameters of the structure from the scattered field. To this end, a forward scattering model based on the Small Perturbation Method (SPM) is developed to calculate the first-order spectral-domain bistatic scattering coefficients of a two-layer rough-surface structure. SPM requires the boundaries to be only slightly rough compared to the wavelength; to understand the range of applicability of this method in scattering from two-layer rough surfaces, its region of validity is investigated by comparing its output with that of a first-principles solver that does not impose roughness restrictions. The Method of Moments (MoM) is used for this purpose. Finally, for retrieval of the model parameters of the layered structure from the scattered field, an inversion scheme based on the simulated annealing method is investigated and a strategy is proposed to avoid convergence to local minima.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasetyo, Retno Agung, E-mail: prasetyo.agung@bmkg.go.id; Heryandoko, Nova; Afnimar
The source mechanism of the earthquake of July 2, 2013 was investigated using moment tensor inversion, and the result was compared with field observations. Waveform data from five stations of BMKG's seismic network were used to estimate the mechanism of the earthquake, namely KCSI, MLSI, LASI, TPTI and SNSI. Mainshock data were taken over 200 seconds and filtered with a Butterworth band-pass filter from 0.03 to 0.05 Hz. The moment tensor inversion method is applied based on a point-source assumption, and the Green functions are calculated using the extended reflectivity method as modified by Kohketsu. The inversion result showed strike-slip faulting, with a nodal plane strike/dip/rake of 124/80.6/152.8 and a minimum variance of 0.3285 at a (centroid) depth of 6 km; it is categorized as a shallow earthquake. Field observation indicated that damaged buildings were oriented to the east, which can be related to the southwest dip direction with a rake of 152 degrees. In conclusion, the pressure (P) and tension (T) axes indicate that the dominant compression comes from the south, caused by the push of the Indo-Australian plate.
Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hojin; Becker, Stephen; Lee, Rena
2013-07-15
Purpose: This study presents an improved technique to further simplify the fluence map in intensity-modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency while maintaining plan quality. Methods: First-order total-variation (TV) minimization based on the L1 norm has been proposed to reduce the complexity of the fluence map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing of the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0 minimization is the ideal solution to the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-minimization technique to incorporate the benefits of the L0 norm into the tractability of L1 minimization. The weight multiplied to each element is inversely related to the magnitude of the corresponding element and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV minimization further improves sparsity in the fluence-map variations, ultimately enhancing delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic minimization (generally used in clinical IMRT), conventional TV minimization, and the proposed reweighted TV minimization, implemented with a large-scale L1 solver (templates for first-order conic solvers), for five sets of patient clinical data. Criteria such as the conformation number (CN), the modulation index (MI), and the estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence maps than the quadratic and conventional TV-based techniques. To attain a given CN and dose sparing of the critical organs for the five clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35 relative to the TV-minimization and quadratic-minimization plans, respectively, while the MI decreases by about 20%-30% and 40%-60% over the plans from the two existing techniques. Under these conditions, the total treatment time is reduced by 12-30 s and 30-80 s, mainly owing to much shorter multileaf collimator (MLC) travel time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining plan quality in terms of target conformity and critical-structure sparing.
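The reweighting step itself is simple; the sketch below (Candès-Wakin-Boyd style, our own construction rather than the authors' clinical solver, which also reweights fluence-map differences rather than raw coefficients) demonstrates it on a toy sparse-recovery problem, with ISTA as the inner weighted-lasso solver:

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(z, t):
    # soft-thresholding: proximal operator of the weighted l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def irl1(A, b, lam=0.1, eps=1e-2, outer=5, inner=200):
    """Iteratively reweighted l1: each outer pass solves a weighted
    lasso with ISTA, then sets w_i = 1 / (|x_i| + eps) so that small
    components are penalized heavily and large ones almost for free."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x, w = np.zeros(n), np.ones(n)
    for _ in range(outer):
        for _ in range(inner):
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)      # the reweighting step
    return x

# toy demo on a sparse recovery problem
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[7, 50, 120]] = [4.0, -3.0, 2.5]
b = A @ x_true + rng.normal(0, 0.01, 60)
x = irl1(A, b)
print(np.nonzero(np.abs(x) > 0.2)[0])   # ideally [7, 50, 120]
```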
Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity.
Santos, Fabiane Igansi de Castro Dos; Marini, Naciele; Santos, Railson Schreinert Dos; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio; de Oliveira, Antonio Costa
2018-01-01
Reverse transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, to obtain accurate results, it depends on data normalization using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes for iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stressful condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies of iron toxicity in both indica and japonica subspecies. The importance of studying genes other than the traditional endogenous ones for use as normalizers is also shown here.
Moving Forward with School Nutrition Policies: A Case Study of Policy Adherence in Nova Scotia.
McIsaac, Jessie-Lee D; Shearer, Cindy L; Veugelers, Paul J; Kirk, Sara F L
2015-12-01
Many Canadian school jurisdictions have developed nutrition policies to promote health and improve the nutritional status of children, but research is needed to clarify adherence, guide practice-related decisions, and move policy action forward. The purpose of this research was to evaluate policy adherence through a review of online lunch menus of elementary schools in Nova Scotia (NS), while also providing transferable evidence for other jurisdictions. School menus in NS were scanned, and a list of commonly offered items was categorized according to the minimum, moderate, or maximum nutrition categories in the NS policy. The results of the menu review showed variability in policy adherence that depended on schools' food preparation practices. Although further research is needed to clarify preparation practices, the previously reported challenges of healthy food preparation (e.g., cost, social norms) suggest that many schools in NS are likely not able to use these healthy preparations, signifying potential noncompliance with the policy. Leadership and partnerships are needed among researchers, policy makers, and nutrition practitioners to address the complexity of issues related to food marketing and social norms that influence school food environments, to inspire a culture where healthy and nutritious food is available and accessible to children.
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical; the ELASSO had not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including simulations, methods, quality measures and visualization routines, are freely available on a public website.
Matoulek, Martin; Tuka, Vladimír; Fialová, Magdalena; Nevšímalová, Soňa; Šonka, Karel
2017-06-01
Cardiopulmonary fitness depends on daily energy expenditure or the amount of daily exercise. Patients with narcolepsy spend more time being sleepy or asleep than controls; thus we may speculate that they have a lower quantity and quality of physical activity. The aim of the present study was therefore to test the hypothesis that exercise tolerance in narcolepsy depends negatively on sleepiness. The cross-sectional study included 32 patients with narcolepsy with cataplexy, 10 patients with narcolepsy without cataplexy, and 36 age- and gender-matched control subjects, in whom a symptom-limited exercise stress test with expired gas analysis was performed. A linear regression analysis with multivariate models was used with stepwise variable selection. In narcolepsy patients, maximal oxygen uptake (VO2peak) was 30.1 ± 7.5 mL/kg/min, lower than the 36.0 ± 7.8 mL/kg/min of controls (p = 0.001); this corresponded to 86.4% ± 20.0% of the population norm (VO2peak%) and to a standard deviation (VO2peakSD) of -1.08 ± 1.63 mL/kg/min from the population norm. VO2peak depended primarily on gender (p = 0.007) and on sleepiness (p = 0.046). VO2peak% depended on sleepiness (p = 0.028) and on age (p = 0.039). VO2peakSD depended on the number of cataplexy episodes per month (p = 0.015) and on age (p = 0.030). Cardiopulmonary fitness in narcolepsy with and without cataplexy is inversely related to the degree of sleepiness and the cataplexy episode frequency.
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We instead identify best practices using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.
NASA Astrophysics Data System (ADS)
Abedi, Maysam; Fournier, Dominique; Devriese, Sarah G. R.; Oldenburg, Douglas W.
2018-05-01
This work presents the application of an integrated geophysical survey of magnetometry and frequency-domain electromagnetic (FDEM) data to image a geological unit located in the Kalat-e-Reshm prospect area in Iran, which has good potential for ore mineralization. The aim of this study is to characterize a 3D arc-shaped andesite unit concealed by sedimentary cover. This unit consists of two segments: the top one is a porphyritic andesite having potential for ore mineralization, especially copper, whereas the lower segment corresponds to an unaltered andesite rock. Airborne electromagnetic data were used to delineate the top segment as a resistive unit embedded in the sediment column of an alluvial fan, while the lower andesite unit was detected by magnetic field data. In our research, the FDEM data were first inverted with a laterally constrained 1D program to provide three pieces of information that facilitate full 3D inversion of EM data: (1) the noise levels associated with the FDEM observations, (2) an estimate of the general conductivity structure in the prospect area, and (3) the location of the sought target. The EM data inversion was then extended to 3D using a parallelized OcTree-based code to better determine the boundaries of the porphyry unit, where a transition exists from the surface sediment to the upper segment. Moreover, a mixed-norm inversion approach was used for the magnetic data to construct a compact and sharp susceptible andesite unit at depth, beneath the top resistive and non-susceptible segment. The blind geological unit was eventually interpreted based on a combined model of conductivity and magnetic susceptibility acquired by individually inverting these geophysical surveys, which were collected simultaneously.
Robust Adaptive Flight Control Design of Air-breathing Hypersonic Vehicles
2016-12-07
Dynamic inversion controller design for a non-minimum-phase hypersonic vehicle is derived by Kuipers et al. [2008]. Moreover, integrated guidance and... stabilization time for the inner-loop variables is less than that of the intermediate-loop variables because of the three-loop control design methodology. The control... adaptive design. Control Engineering Practice, 2016. Michael A. Bolender and David B. Doman. A non-linear model for the longitudinal dynamics of a
NASA Astrophysics Data System (ADS)
Mao, Zhangwen; Guo, Wei; Ji, Dianxiang; Zhang, Tianwei; Gu, Chenyi; Tang, Chao; Gu, Zhengbin; Nie*, Yuefeng; Pan, Xiaoqing
In situ reflection high-energy electron diffraction (RHEED) and its intensity oscillations are extremely important for the growth of epitaxial thin films with atomic precision. The RHEED intensity oscillations of complex oxides are, however, rather complicated, and a general model is still lacking. Here, we report the unusual phase inversion and frequency doubling of RHEED intensity oscillations observed in the layer-by-layer growth of SrTiO3 using oxide molecular beam epitaxy. In contrast to the common understanding that the maximum (minimum) intensity occurs at SrO (TiO2) termination, we found that both maximum and minimum intensities can occur at SrO, TiO2, or even incomplete terminations, depending on the incident angle of the electron beam. This raises a fundamental question of whether one can rely on RHEED intensity oscillations to precisely control the growth of thin films. A general model including surface roughness and termination-dependent mean inner potential qualitatively explains the observed phenomena and answers the question of how to prepare atomically and chemically precise surfaces/interfaces using RHEED oscillations for complex oxides. We thank the National Basic Research Program of China (No. 11574135, 2015CB654901) and the National Thousand Young Talents Program.
The inverse problem for definition of the shape of a molten contact bridge
NASA Astrophysics Data System (ADS)
Kharin, Stanislav N.; Sarsengeldin, Merey M.
2017-09-01
The paper presents the results of an investigation of the bridging phenomenon occurring at the opening of electrical contacts. The mathematical model describing the dynamics of the molten metal bridge takes into account the Thomson effect. It is based on a system of partial differential equations for the temperature and electrical fields of the bridge in a domain containing two moving unknown boundaries. One of them is the interface between the liquid and solid zones of the bridge and is found by solving the corresponding Stefan problem. The second free boundary corresponds to the shape of the visible part of the bridge. Its definition is an inverse problem, whose solution requires minimizing the energy consumed in forming the shape of a quasi-stationary bridge. The three components of this energy, namely surface tension, the pinch effect, and gravitation, define a functional whose minimum gives the required shape of the bridge. The solution of the corresponding variational problem is found by reducing it to a system of ordinary differential equations. Calculated values of the voltage at bridge rupture for various metals are in good agreement with the experimental data. Criteria responsible for the mechanism of molten bridge rupture are introduced in the paper.
RNAiFold: a web server for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-07-01
Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring the GC content to lie within a certain range or requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two specialized servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic and is hence suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem: given a representation of the desired hybridization structure, RNAiFold returns two sequences whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold. Source code for the underlying algorithms, implemented in COMET and supported on Linux, can be downloaded at the server website.
Neurocognitive screening of lead-exposed andean adolescents and young adults.
Counter, S Allen; Buchanan, Leo H; Ortega, Fernando
2009-01-01
This study was designed to assess the utility of two psychometric tests with putatively minimal cultural bias for use in field screening of lead (Pb)-exposed Ecuadorian Andean workers. Specifically, the study evaluated the effectiveness in Pb-exposed adolescents and young adults of a nonverbal reasoning test standardized for younger children, and compared the findings with performance on a test of auditory memory. The Raven Coloured Progressive Matrices (RCPM) was used as a test of nonverbal intelligence, and the Digit Span subtest of the Wechsler IV intelligence scale was used to assess auditory memory/attention. The participants were 35 chronically Pb-exposed Pb-glazing workers, aged 12-21 yr. Blood lead (PbB) levels for the study group ranged from 3 to 86 μg/dl, with 65.7% of the group at or above 10 μg/dl. Zinc protoporphyrin/heme ratios (ZPP/heme) ranged from 38 to 380 μmol/mol, with 57.1% of the participants showing abnormal ZPP/heme (>69 μmol/mol). ZPP/heme was significantly correlated with PbB levels, suggesting chronic Pb exposure. Performance on the RCPM was below average on the U.S., British, and Puerto Rican norms, but average on the Peruvian norms. Significant inverse associations between PbB/ZPP concentrations and RCPM standard scores using the U.S., Puerto Rican, and Peruvian norms were observed, indicating decreasing RCPM test performance with increasing PbB and ZPP levels. RCPM scores were significantly correlated with performance on the Digit Span test of auditory memory. The mean Digit Span scale score was below average, suggesting auditory memory/attention deficits. In conclusion, both the RCPM and Digit Span tests were found to be effective instruments for field screening of visual-spatial reasoning and auditory memory abilities, respectively, in Pb-exposed Andean adolescents and young adults.
NASA Astrophysics Data System (ADS)
Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.
2014-12-01
We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator and Jacobian calculations, as well as synthetic and real data inversions, are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (the magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated; it allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of the field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of the refraction of electromagnetic waves normal to the slopes at high frequencies. Run-time tests indicate that for meshes as large as 150×150×60 elements, the MT forward response and Jacobians can be calculated in about 2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step relative to the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including topography in the inversion and test different regularization schemes using the weighted second norm of the model gradient, as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mount St. Helens.
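The memory argument for the data-space step can be made explicit in standard notation (ours): with J the N_d × N_m Jacobian, C_d and C_m the data and model covariances, and r the data residual, the model-space Gauss-Newton update and its algebraically equivalent data-space form are

```latex
\delta m \;=\; \bigl(J^{T} C_d^{-1} J + C_m^{-1}\bigr)^{-1} J^{T} C_d^{-1}\, r
\;=\; C_m J^{T} \bigl(J C_m J^{T} + C_d\bigr)^{-1} r ,
```

so the data-space form factors an N_d × N_d system instead of an N_m × N_m one, a large saving when the number of data is much smaller than the number of mesh cells.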
Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Askan, A. (Carnegie Mellon U.); Akcelik, V.
2009-04-30
We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill-posedness and multiple minima. To overcome ill-posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
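As a small illustration of the total variation term described above, the sketch below (ours, with an assumed smoothing parameter eps and a generic forward operator; not the authors' implementation) evaluates a smoothed TV regularizer and the corresponding regularized objective for a 2D material image.

```python
import numpy as np

def tv(m, eps=1e-6):
    """Smoothed total variation of a 2D material image m."""
    dx = np.diff(m, axis=0, append=m[-1:, :])      # forward differences
    dy = np.diff(m, axis=1, append=m[:, -1:])
    return np.sum(np.sqrt(dx**2 + dy**2 + eps))    # eps smooths the kink at 0

def objective(m, d_obs, forward, lam):
    """Data misfit plus TV penalty, the form minimized in TV-regularized inversion."""
    r = forward(m) - d_obs
    return 0.5 * r @ r + lam * tv(m)

m = np.zeros((32, 32)); m[8:24, 8:24] = 1.0        # sharp block: TV stays modest
print(tv(m))                                        # oscillatory models score far higher
```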
2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Brossier, R.; Virieux, J.; Operto, S.
2008-12-01
Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e., the resolution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e., minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of an expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (conjugate gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be performed in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability, and illustrate its performance with realistic synthetic case studies.
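A toy sketch of the preconditioning idea mentioned above, under our own simplifications: the diagonal of an approximate Hessian is used to rescale the variables before a SciPy L-BFGS solve, standing in for gradient scaling in full waveform inversion. The quadratic misfit is a placeholder for the elastodynamic objective.

```python
import numpy as np
from scipy.optimize import minimize

A = np.diag([100.0, 1.0, 0.01])                    # badly scaled toy problem
def misfit(m):
    r = A @ m - np.ones(3)
    return 0.5 * r @ r, A.T @ r                    # objective value and gradient

d = np.sqrt(np.diag(A.T @ A))                      # diagonal pseudo-Hessian
f = lambda z: misfit(z / d)[0]                     # optimize in scaled variables
g = lambda z: misfit(z / d)[1] / d                 # chain rule for the gradient

res = minimize(f, np.zeros(3), jac=g, method="L-BFGS-B")
print(res.x / d)                                   # recovered model, well conditioned
```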
Solar wind electron densities from Viking dual-frequency radio measurements
NASA Technical Reports Server (NTRS)
Muhleman, D. O.; Anderson, J. D.
1981-01-01
Simultaneous phase-coherent, two-frequency measurements of the time delay between the earth station and the Viking spacecraft have been analyzed in terms of the electron density profiles from 4 solar radii to 200 solar radii. The measurements were made during a period of solar activity minimum (1976-1977) and show a strong solar latitude effect. The data were analyzed both with a model-independent, direct numerical inversion technique and with model fitting, yielding essentially the same results. It is shown that the solar wind density near the solar equator can be represented by two power laws, proportional to r^(-2.7) and r^(-2.04). However, the more rapidly falling term quickly disappears at moderate latitudes (approximately 20 deg), leaving only the inverse-square behavior.
Inverse design engineering of all-silicon polarization beam splitters
NASA Astrophysics Data System (ADS)
Frandsen, Lars H.; Sigmund, Ole
2016-03-01
Utilizing the inverse design engineering method of topology optimization, we have realized high-performing all-silicon ultra-compact polarization beam splitters. We show that the device footprint of the polarization beam splitter can be as compact as ~2 μm² while performing experimentally with a polarization splitting loss lower than ~0.82 dB and an extinction ratio larger than ~15 dB in the C-band. We investigate the device performance as a function of the device length and find a minimum length above which the performance improves only incrementally. Imposing a minimum feature size constraint in the optimization is shown to affect the performance negatively, revealing the necessity for light to scatter on a sub-wavelength scale to obtain functionalities in compact photonic devices.
Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang
2017-07-01
It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model consists of three basic components: an adaptive partition window, a nuclear norm regularization, and a weighted sequence. First, owing to the periodic repetition of the impulsive features, an adaptive partition window is designed to transform them into a data matrix. The key property of the partition window is that it accumulates and aligns all local feature information. All columns of the data matrix then share similar waveforms, and a core physical phenomenon arises: the singular values of the data matrix exhibit a sparse distribution. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence, with weights adaptively tuned to be inversely proportional to singular value amplitude, is adopted to preserve the large singular values. The proposed model is difficult to solve because of its non-convexity, so a new algorithm is developed that searches for a satisfactory stationary solution by alternately applying a proximal operator step and least-squares fitting. Moreover, the sensitivity and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to wind turbine (WT) bearing fault detection and its effectiveness is verified. Compared with the currently popular bearing fault diagnosis techniques, wavelet analysis and spectral kurtosis, our model achieves a higher diagnostic accuracy.
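The proximal operator step mentioned above has a well-known closed form for a weighted nuclear norm. The sketch below (our illustration; parameter choices are not from the paper) soft-thresholds singular values with weights inversely proportional to their amplitude, so the large, information-rich singular values are preserved.

```python
import numpy as np

def weighted_svt(Y, lam, eps=1e-8):
    """Weighted singular value thresholding (prox of a weighted nuclear norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = 1.0 / (s + eps)                            # big singular value -> small weight
    s_thr = np.maximum(s - lam * w, 0.0)           # soft-threshold each singular value
    return U @ np.diag(s_thr) @ Vt

# toy data matrix: aligned impulsive waveforms (rank 1) plus noise
rng = np.random.default_rng(1)
X = np.outer(np.ones(8), np.sin(np.linspace(0, 3, 50)))
D = weighted_svt(X + 0.1 * rng.standard_normal(X.shape), lam=0.5)
print(np.linalg.matrix_rank(D, tol=1e-3))          # close to 1: noise suppressed
```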
Xiao, Hanguang; Tan, Isabella; Butlin, Mark; Li, Decai; Avolio, Alberto P
2018-03-01
Arterial wave reflection has been shown to have a significant dependence on heart rate (HR). However, the underlying mechanisms inherent in the HR dependency of wave reflection have not been well established. This study aimed to investigate the potential mechanisms and role of arterial viscoelasticity using a 55-segment transmission line model of the human arterial tree combined with a fractional viscoelastic model. At varying degrees of viscoelasticity modeled as fractional order parameter α, reflection magnitude (RM), reflection index (RI), augmentation index (AIx), and a proposed novel normalized reflection coefficient (Γ_norm) were estimated at different HRs from 60 to 100 beats/min with a constant mean flow of 70 ml/s. RM, RI, AIx, and Γ_norm at the ascending aorta decreased linearly with increasing HR at all degrees of viscoelasticity. The means ± SD of the HR dependencies of RM, RI, AIx, and Γ_norm were -0.042 ± 0.004, -0.018 ± 0.001, -1.93 ± 0.55%, and -0.037 ± 0.002 per 10 beats/min, respectively. There was a significant and nonlinear reduction in RM, RI, and Γ_norm with increasing α at all HRs. In addition, HR and α have a more pronounced effect on wave reflection at the aorta than at peripheral arteries. The potential mechanism of the HR dependency of wave reflection was explained by the inverse dependency of the reflection coefficient on frequency, with the harmonics of the pulse waveform moving toward higher frequencies with increasing HR. This HR dependency can be modulated by arterial viscoelasticity. NEW & NOTEWORTHY This in silico study addressed the underlying mechanisms of how heart rate influences arterial wave reflection based on a transmission line model and elucidated the role of arterial viscoelasticity in the dependency of arterial wave reflection on heart rate. This study provides insights into wave reflection as a frequency-dependent phenomenon and demonstrates the validity of using reflection magnitude and reflection index as wave reflection indexes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connell, D.R.
1986-12-01
The method of progressive hypocenter-velocity inversion has been extended to incorporate S-wave arrival time data and to estimate S-wave velocities in addition to P-wave velocities. Adding S-wave data to progressive inversion does not completely eliminate hypocenter-velocity tradeoffs, but they are substantially reduced. Results of a P- and S-wave progressive hypocenter-velocity inversion at The Geysers show that the top of the steam reservoir is clearly defined by a large decrease of Vp/Vs at the condensation zone-production zone contact. The depth interval of maximum steam production coincides with the minimum observed Vp/Vs, and Vp/Vs increases below the shallow primary production zone, suggesting that the reservoir rock becomes more fluid-saturated. The moment tensor inversion method was applied to three microearthquakes at The Geysers. Estimated principal stress orientations were comparable to those estimated using P-wave first motions as constraints. Well-constrained principal stress orientations were obtained for one event for which the 17 P-wave first motions could not distinguish between normal-slip and strike-slip mechanisms. The moment tensor estimates of principal stress orientations were obtained using far fewer stations than required for first-motion focal mechanism solutions. The three focal mechanisms obtained here support the hypothesis that focal mechanisms are a function of depth at The Geysers. Progressive inversion as developed here and the moment tensor inversion method provide a complete approach for determining earthquake locations, P- and S-wave velocity structure, and earthquake source mechanisms.
On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
1997-01-01
In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this leads to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result leads to less conservative and more realistic uncertainty models for use in robust control.
Observations of non-linear plasmon damping in dense plasmas
NASA Astrophysics Data System (ADS)
Witte, B. B. L.; Sperling, P.; French, M.; Recoules, V.; Glenzer, S. H.; Redmer, R.
2018-05-01
We present simulations using finite-temperature density functional theory molecular dynamics to calculate dynamic dielectric properties in warm dense aluminum. The comparison between exchange-correlation functionals in the Perdew-Burke-Ernzerhof (PBE) approximation, the strongly constrained and appropriately normed (SCAN) semilocal functional, and the Heyd-Scuseria-Ernzerhof (HSE) approximation indicates evident differences in the electron transition energies, dc conductivity, and Lorenz number. The HSE calculations show excellent agreement with x-ray scattering data [Witte et al., Phys. Rev. Lett. 118, 225001 (2017)] as well as dc conductivity and absorption measurements. These findings demonstrate non-Drude behavior of the dynamic conductivity above the Cooper minimum that needs to be taken into account to determine optical properties in the warm dense matter regime.
Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties
Chi, Eric C.; Lange, Kenneth
2014-01-01
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
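The shrinkage behavior described above can be illustrated with a simple linear shrinkage toward a scaled identity, sketched below (our stand-in with a hypothetical mixing weight alpha, not the paper's nuclear-norm-penalized MAP estimator): even with fewer samples than covariates, the result is invertible and well-conditioned.

```python
import numpy as np

def shrunk_covariance(X, alpha=0.2):
    """Shrink the sample covariance toward a stable scaled-identity target."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / p                           # scale of the identity target
    return (1.0 - alpha) * S + alpha * mu * np.eye(p)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))                  # n = 30 samples, p = 50 covariates
Sigma = shrunk_covariance(X)
print(np.linalg.cond(Sigma))                       # finite: S alone would be singular
```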
The primary prevention of alcohol problems: a critical review of the research literature.
Moskowitz, J M
1989-01-01
The research evaluating the effects of programs and policies in reducing the incidence of alcohol problems is critically reviewed. Four types of preventive interventions are examined including: (1) policies affecting the physical, economic and social availability of alcohol (e.g., minimum legal drinking age, price and advertising of alcohol), (2) formal social controls on alcohol-related behavior (e.g., drinking-driving laws), (3) primary prevention programs (e.g., school-based alcohol education), and (4) environmental safety measures (e.g., automobile airbags). The research generally supports the efficacy of three alcohol-specific policies: raising the minimum legal drinking age to 21, increasing alcohol taxes and increasing the enforcement of drinking-driving laws. Also, research suggests that various environmental safety measures reduce the incidence of alcohol-related trauma. In contrast, little evidence currently exists to support the efficacy of primary prevention programs. However, a systems perspective of prevention suggests that prevention programs may become more efficacious after widespread adoption of prevention policies that lead to shifts in social norms regarding use of beverage alcohol.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
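The offline surrogate construction can be illustrated in one dimension: fit a polynomial approximation of an expensive forward model by least squares over prior samples, then evaluate only the cheap surrogate inside the sampler. This sketch is ours (an ordinary least-squares fit; the paper uses Christoffel least squares with generalized polynomial chaos bases).

```python
import numpy as np

forward = lambda theta: np.exp(-theta) * np.sin(3.0 * theta)   # stand-in model

rng = np.random.default_rng(0)
theta_train = rng.uniform(0.0, 2.0, 200)           # offline draws from the prior
V = np.vander(theta_train, 8)                      # degree-7 polynomial basis
c, *_ = np.linalg.lstsq(V, forward(theta_train), rcond=None)

surrogate = lambda theta: np.vander(np.atleast_1d(theta), 8) @ c
print(abs(surrogate(1.0)[0] - forward(1.0)))       # small approximation error
```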
Low-Current, Xenon Orificed Hollow Cathode Performance for In-Space Applications
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Patterson, Michael J.; Gallimore, Alec D.
2002-01-01
An experimental investigation of the operating characteristics of 3.2-mm-diameter orificed hollow cathodes was conducted to examine low-current and low-flow-rate operation. Cathode power was minimized with an orifice aspect ratio of approximately one and the use of an enclosed keeper. Cathode flow rate requirements were proportional to the orifice diameter and the inverse of the orifice length. The minimum power consumption in diode mode was 10 W, and the minimum mass flow rate required for spot-mode emission was approximately 0.08 mg/s. Cathode temperature profiles were obtained using an imaging radiometer, and conduction was found to be the dominant heat transfer mechanism from the cathode tube. Orifice plate temperatures were found to be weakly dependent upon the flow rate and strongly dependent upon the current.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest-cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, that the ME constraint produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure produces a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum-cost solutions either inside or outside some priors.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
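For intuition, a non-negative L1-regularized inversion of a discretized Laplace kernel can be written as a few lines of proximal gradient (ISTA) iteration. This sketch is our simplification: the paper solves the same kind of problem with the interior-method solver PDCO, and lam and the grid here are arbitrary choices.

```python
import numpy as np

t = np.linspace(0.01, 3.0, 200)                    # acquisition times
T2 = np.logspace(-2, 1, 100)                       # candidate relaxation times
A = np.exp(-t[:, None] / T2[None, :])              # discretized Laplace kernel

x_true = np.zeros(100); x_true[[30, 70]] = [1.0, 0.5]
b = A @ x_true + 0.01 * np.random.default_rng(0).standard_normal(200)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant
x = np.zeros(100)
for _ in range(2000):                              # ISTA with non-negativity
    x = np.maximum(x - step * (A.T @ (A @ x - b) + lam), 0.0)
print(np.flatnonzero(x > 0.05))                    # support should sit near 30 and 70
```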
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging. A key observation is that for such multi-transmitter problems, each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been the development of a modular system for EM inversion using an object-oriented approach. This system has proven useful for rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems before approaching more computationally cumbersome three-dimensional problems.
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. Sensing and actuating devices that possess the property of spatial robustness require reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, where the optimization scheme attempts to produce the locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open-loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open-loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces the locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuator pairs in sleep mode.
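The placement criterion above reduces, for a linear model, to comparing transfer function norms across candidate locations. A minimal sketch (our toy two-state system, not the plate model of the paper) computes the H2 norm of a disturbance-to-output map via the controllability Gramian; a smaller norm marks a better location.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of C (sI - A)^{-1} B for a stable A."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    return float(np.sqrt(np.trace(C @ P @ C.T)))

A = np.array([[0.0, 1.0], [-4.0, -0.4]])           # one lightly damped mode
B = np.array([[0.0], [1.0]])                       # spatial disturbance input
for C in (np.array([[1.0, 0.0]]), np.array([[0.3, 0.1]])):   # candidate sensors
    print(h2_norm(A, B, C))                        # rank candidate locations by this norm
```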
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures with sharp discontinuities are preserved better than with a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems, the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency, an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Results for synthetic examples demonstrate that the randomized decomposition provides good accuracy with reduced computational and memory demands compared to classical approaches.
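The dimension-reduction step can be illustrated with a plain randomized SVD in the style of Halko, Martinsson, and Tropp; the sketch below is our generic version, not the authors' randomized generalized SVD: a Gaussian sketch captures the numerical range of the matrix, and the SVD is then computed on a much smaller problem.

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Rank-k SVD via a random range finder with oversampling p."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal range basis
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 80)) @ rng.standard_normal((80, 400))  # rank <= 80
U, s, Vt = randomized_svd(A, k=80)
print(np.allclose(A, (U * s) @ Vt, atol=1e-6))     # True: the range was captured
```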
Jiang, Hua; Lu, Wenke; Zhang, Guoan
2013-07-01
In this paper, we propose a low-insertion-loss, miniaturized wavelet transform and inverse transform processor using surface acoustic wave (SAW) devices. The new SAW wavelet transform devices (WTDs) use a structure with two electrode-widths-controlled (EWC) single-phase unidirectional transducers (SPUDT-SPUDT). This structure consists of an input withdrawal-weighted interdigital transducer (IDT) and an output overlap-weighted IDT. Three experimental devices for the different scales 2^(-1), 2^(-2), and 2^(-3) are designed and measured. The minimum insertion losses of the three devices reach 5.49 dB, 4.81 dB, and 5.38 dB, respectively, which are lower than earlier results. Both the electrode width and the number of electrode pairs are reduced, making the three devices much smaller than earlier devices. Therefore, the method described in this paper is suitable for implementing an arbitrary multi-scale, low-insertion-loss, miniaturized wavelet transform and inverse transform processor using SAW devices. Copyright © 2013 Elsevier B.V. All rights reserved.
Atif, Muhammad; Sulaiman, Syed Azhar Syed; Shafie, Asrul Akmal; Asif, Muhammad; Ahmad, Nafees
2013-10-01
The aim of the study was to obtain norms of the SF-36v2 health survey and the association of summary component scores with socio-demographic variables in healthy households of tuberculosis (TB) patients. All household members (18 years and above; healthy; literate) of registered tuberculosis patients who came for contact tracing during March 2010 to February 2011 at the respiratory clinic of Penang General Hospital were invited to complete the SF-36v2 health survey using the official translation of the questionnaire in Malay, Mandarin, Tamil and English. Scoring of the questionnaire was done using Quality Metric's QM Certified Scoring Software version 4. Multivariate analysis was conducted to uncover the predictors of physical and mental health. A total of 649 eligible respondents were approached, and 525 agreed to participate in the study (response rate = 80.1%). Of the consenting respondents, 46.5% were male and only 5.3% were over 75 years. Internal consistencies met the minimum criteria (α > 0.7). Correlations among the scales were always less than their respective reliability coefficients. Mean physical component summary scale scores were equivalent to United States general population norms. However, there was a difference of more than three norm-based scoring points for mean mental component summary scores, indicating poor mental health. A notable proportion of the respondents was at risk of depression. Age 75 years and above (p = 0.001; OR 32.847), widowed status (p = 0.013; OR 2.599) and postgraduate education (p < 0.001; OR 7.865) were predictors of poor physical health, while unemployment (p = 0.033; OR 1.721) was the only predictor of poor mental health. The SF-36v2 is a valid instrument to assess HRQoL among the households of TB patients. Study findings indicate the existence of poor mental health and risk of depression among family caregivers of TB patients. We therefore recommend that caregivers of TB patients be offered intensive support and special attention to cope with these emotional problems.
Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu
2014-06-05
Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
Dong, Wei-Feng; Canil, Sarah; Lai, Raymond; Morel, Didier; Swanson, Paul E.; Izevbaye, Iyare
2018-01-01
A new automated MYC IHC classifier based on bivariate logistic regression is presented. The predictor relies on image analysis developed with the open-source ImageJ platform. From a histologic section immunostained for MYC protein, 2 dimensionless quantitative variables are extracted: (a) the relative distance between nuclei positive for MYC IHC, based on a Euclidean minimum spanning tree graph, and (b) the coefficient of variation of the MYC IHC stain intensity among MYC IHC-positive nuclei. The distance between positive nuclei is suggested to correlate inversely with MYC gene rearrangement status, whereas the coefficient of variation is suggested to correlate inversely with physiological regulation of MYC protein expression. The bivariate classifier was compared with 2 other MYC IHC classifiers (based on the percentage of MYC IHC-positive nuclei), all tested on 113 lymphomas, mostly diffuse large B-cell lymphomas, with known MYC fluorescent in situ hybridization (FISH) status. The bivariate classifier strongly outperformed the "percentage of MYC IHC-positive nuclei" methods in predicting MYC+ FISH status, with 100% sensitivity (95% confidence interval, 94-100) and 80% specificity. The test is rapidly performed and might at a minimum provide primary IHC screening for MYC gene rearrangement status in diffuse large B-cell lymphomas. Furthermore, as this bivariate classifier actually predicts "permanently overexpressed MYC protein status," it might identify non-translocation-related chromosomal anomalies missed by FISH. PMID:27093450
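The two image features can be computed in a few lines with standard scientific Python tools, as sketched below with simulated nuclei (our illustration of the feature construction, not the ImageJ pipeline; the labels are random stand-ins for FISH status).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.linear_model import LogisticRegression

def features(xy, intensity):
    """Mean EMST edge length between positive nuclei, and intensity CV."""
    mst = minimum_spanning_tree(squareform(pdist(xy)))
    mean_edge = mst.sum() / (len(xy) - 1)
    cv = intensity.std() / intensity.mean()
    return mean_edge, cv

rng = np.random.default_rng(0)
X = np.array([features(rng.uniform(0, 100, (50, 2)),      # simulated nuclei
                       rng.uniform(50, 200, 50)) for _ in range(40)])
y = rng.integers(0, 2, 40)                                 # toy FISH labels
clf = LogisticRegression().fit(X, y)                       # bivariate classifier
print(clf.predict_proba(X[:3]))
```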
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance (MV) beamforming is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse spatial covariance matrix must be calculated. Noteworthy attempts to solve this problem include beamspace adaptive beamforming methods and the fast MV method based on principal component analysis. These are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix, and the dimension of the covariance matrix is reduced by approximating the matrix with only its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the other methods when the dimensionality of the covariance matrices is reduced to the same dimension.
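A minimal sketch of the reduced-dimension idea, under our own arbitrary parameter choices (64 elements, 4 basis functions; not the paper's configuration): element-space snapshots are projected onto a few orthonormalized Legendre polynomials, so only a small covariance matrix has to be inverted for the MV weights.

```python
import numpy as np
from numpy.polynomial import Legendre

M, K = 64, 4                                       # array elements, basis size
x = np.linspace(-1.0, 1.0, M)
B = np.stack([Legendre.basis(k)(x) for k in range(K)], axis=1)
B, _ = np.linalg.qr(B)                             # orthonormal M x K basis

rng = np.random.default_rng(0)
data = rng.standard_normal((M, 200))               # per-channel snapshots
y = B.T @ data                                     # beamspace data, K x 200
R = y @ y.T / 200 + 1e-3 * np.eye(K)               # small K x K covariance
a = B.T @ np.ones(M)                               # steering vector in beamspace
w = np.linalg.solve(R, a); w /= a @ w              # MV weights: only a K x K inverse
print((B @ w) @ data[:, 0])                        # one beamformed output sample
```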
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
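The QMC sampling stage can be sketched with SciPy's Sobol generator: draw low-discrepancy prior samples of permittivity at the pilot points and weight them by a Gaussian likelihood of the travel-time misfit. The forward model below is a deliberately trivial stand-in for the curved-ray GPR simulator, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import qmc

n_pilot, sigma = 8, 0.05
sampler = qmc.Sobol(d=n_pilot, seed=0)
u = sampler.random_base2(m=10)                     # 2^10 Sobol points in [0, 1)^d
eps = qmc.scale(u, 4.0, 9.0)                       # prior permittivity range

forward = lambda e: np.sqrt(e).mean(axis=1)        # toy travel-time model
d_obs = forward(np.full((1, n_pilot), 6.0))        # "observed" data
w = np.exp(-0.5 * ((forward(eps) - d_obs) / sigma) ** 2)   # likelihood weights
print((w @ eps) / w.sum())                         # posterior-mean permittivities
```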
Computational Analysis of Swallowing Mechanics Underlying Impaired Epiglottic Inversion
Pearson, William G.; Taylor, Brandon K; Blair, Julie; Martin-Harris, Bonnie
2015-01-01
Objective Determine swallowing mechanics associated with the first and second epiglottic movements, that is, movement to horizontal and full inversion respectively, in order to provide a clinical interpretation of impaired epiglottic function. Study Design Retrospective cohort study. Methods A heterogeneous cohort of patients with swallowing difficulties was identified (n=92). Two speech-language pathologists reviewed 5-ml thin and 5-ml pudding videofluoroscopic swallow studies per subject, and assigned epiglottic component scores of 0=complete inversion, 1=partial inversion, and 2=no inversion, forming three groups of videos for comparison. Coordinates mapping minimum and maximum excursion of the hyoid, pharynx, larynx, and tongue base during pharyngeal swallowing were recorded using ImageJ software. A canonical variate analysis with post-hoc discriminant function analysis of coordinates was performed using MorphoJ software to evaluate mechanical differences between groups. Eigenvectors characterizing swallowing mechanics underlying impaired epiglottic movements were visualized. Results Nineteen of 184 video-swallows were rejected for poor quality (n=165). A Goodman-Kruskal index of predictive association showed no correlation between epiglottic component scores and etiologies of dysphagia (λ=.04). A two-way analysis of variance by epiglottic component scores showed no significant interaction effects between sex and age (f=1.4, p=.25). Discriminant function analysis demonstrated statistically significant mechanical differences between epiglottic component scores: 1&2, representing the first epiglottic movement (Mahalanobis distance=1.13, p=.0007); and 0&1, representing the second epiglottic movement (Mahalanobis distance=0.83, p=.003). Eigenvectors indicate that laryngeal elevation and tongue base retraction underlie both epiglottic movements. Conclusion Results suggest that reduced tongue base retraction and laryngeal elevation underlie impaired first and second epiglottic movements. The styloglossus, hyoglossus and long pharyngeal muscles are implicated as targets for rehabilitation in dysphagic patients with impaired epiglottic inversion. PMID:27426940
Benefits and risks of adopting the global code of practice for recreational fisheries
Arlinghaus, Robert; Beard, T. Douglas; Cooke, Steven J.; Cowx, Ian G.
2012-01-01
Recreational fishing constitutes the dominant or sole use of many fish stocks, particularly in freshwater ecosystems in Western industrialized countries. However, despite their social and economic importance, recreational fisheries are generally guided by local or regional norms and standards, with few comprehensive policy and development frameworks existing across jurisdictions. We argue that adoption of a recently developed Global Code of Practice (CoP) for Recreational Fisheries can provide benefits for moving recreational fisheries toward sustainability on a global scale. The CoP is a voluntary document, specifically framed toward recreational fisheries practices and issues, thereby complementing and extending the United Nation's Code of Conduct for Responsible Fisheries by the Food and Agricultural Organization. The CoP for Recreational Fisheries describes the minimum standards of environmentally friendly, ethically appropriate, and—depending on local situations—socially acceptable recreational fishing and its management. Although many, if not all, of the provisions presented in the CoP are already addressed through national fisheries legislation and state-based fisheries management regulations in North America, adopting a common framework for best practices in recreational fisheries across multiple jurisdictions would further promote their long-term viability in the face of interjurisdictional angler movements and some expanding threats to the activity related to shifting sociopolitical norms.
A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.
Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin
2018-07-01
Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm, and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and three slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
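The minimum-norm current solution has a compact linear-algebra form: with fewer field constraints than array channels, the pseudoinverse returns the smallest-norm currents that satisfy them exactly. The sketch below uses a hypothetical Gaussian field profile per element in place of the real coil fields.

```python
import numpy as np

centers = np.linspace(-0.08, 0.08, 9)              # 9 coil element centers (m)
z_slices = np.array([-0.04, 0.0, 0.04])            # desired slice locations (m)
# field at each slice per unit current in each element (Gaussian stand-in)
A = np.exp(-((z_slices[:, None] - centers[None, :]) / 0.03) ** 2)
b = np.array([1.0, 0.0, -1.0]) * 1e-3              # target field offsets (T)

I = np.linalg.pinv(A) @ b                          # minimum-norm current vector
print(I, np.allclose(A @ I, b))                    # constraints met exactly
```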
Correlations of catalytic combustor performance parameters
NASA Technical Reports Server (NTRS)
Bulzan, D. L.
1978-01-01
Correlations for combustion efficiency, percentage pressure drop, and the minimum adiabatic reaction temperature required to meet emissions goals of 13.6 g CO/kg fuel and 1.64 g HC/kg fuel are presented. Combustion efficiency was found to be a function of the cell density, cell circumference, reactor length, reference velocity, and adiabatic reaction temperature. The percentage pressure drop at an adiabatic reaction temperature of 1450 K was found to be proportional to the reference velocity to the 1.5 power and to the reactor length, and inversely proportional to the pressure, cell hydraulic diameter, and fractional open area. The minimum required adiabatic reaction temperature was found to increase with reference velocity and decrease with cell circumference, cell density, and reactor length. A catalyst factor was introduced into the correlations to account for differences between catalysts. Combustion efficiency, percentage pressure drop, and the minimum required adiabatic reaction temperature were all found to be functions of the catalyst factor. The data were from a 12-cm-diameter test rig with noble metal reactors using propane fuel at an inlet temperature of 800 K.
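Stated compactly (with assumed symbols: V_ref the reference velocity, L the reactor length, P the pressure, d_h the cell hydraulic diameter, and f the fractional open area), the pressure-drop correlation above reads:

```latex
\Delta P\,(\%) \;\propto\; \frac{V_{\mathrm{ref}}^{\,1.5}\, L}{P\, d_h\, f}
\qquad \text{at } T_{\mathrm{ad}} = 1450\ \mathrm{K}
```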
Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.
Hirschfeld, T
1974-07-01
When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse-square loss of signal with range. The SNR is thus range-independent below the minimum etendue-matched range and collector-size-independent above it, with the location of the transition being determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.
Solar activity and oscillation frequency splittings
NASA Technical Reports Server (NTRS)
Woodard, M. F.; Libbrecht, K. G.
1993-01-01
Solar p-mode frequency splittings, parameterized by the coefficients through order N = 12 of a Legendre polynomial expansion of the mode frequencies as a function of m/L, were obtained from an analysis of helioseismology data taken at Big Bear Solar Observatory during the 4 years 1986 and 1988-1990 (approximately solar minimum to maximum). Inversion of the even-index splitting coefficients confirms that there is a significant contribution to the frequency splittings originating near the solar poles. The strength of the polar contribution is anticorrelated with the overall level of solar activity in the active latitudes, suggesting a relation to polar faculae. From an analysis of the odd-index splitting coefficients we infer an upper limit to changes in the solar equatorial near-surface rotational velocity of less than 1.9 m/s (3 sigma limit) between solar minimum and maximum.
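For reference, the Legendre parameterization referred to above takes the form below under one common convention (our notation: a_i are the splitting coefficients and L is the degree normalization, often taken as sqrt(l(l+1)); the exact convention is assumed, not quoted from the paper). Odd-index coefficients carry the rotational signal, while even-index coefficients carry the structural and magnetic contributions.

```latex
\nu_{n \ell m} \;=\; \nu_{n \ell} \;+\; L \sum_{i=1}^{12} a_i(n,\ell)\, P_i\!\left(\frac{m}{L}\right)
```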
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and employing a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and without multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
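The IVTCG algorithm itself is specific to the paper, but the ℓ1-plus-quadratic objective it minimizes can be illustrated with plain ISTA (iterative shrinkage-thresholding), a standard solver for the same problem class; the operator A, data b, and regularization weight below are toy values.

```python
# Illustration of the objective min_x 0.5*||Ax - b||^2 + lam*||x||_1, which
# favours sparse sources, solved here with standard ISTA (not the authors'
# IVTCG). The few-measurements/many-voxels shape mimics the BLT setting.
import numpy as np

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))         # few measurements, many voxels
x_true = np.zeros(100); x_true[[7, 42]] = [1.0, -0.5]   # sparse "source"
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(np.round(ista(A, b, lam=0.05), 2))[0])  # recovers ~{7, 42}
```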
The research subject as wage earner.
Anderson, James A; Weijer, Charles
2002-01-01
The practice of paying research subjects for participating in clinical trials has yet to receive an adequate moral analysis. Dickert and Grady argue for a wage payment model in which research subjects are paid an hourly wage based on that of unskilled laborers. If we accept this approach, what follows? Norms for just working conditions emerge from workplace legislation and political theory. All workers, including paid research subjects under Dickert and Grady's analysis, have a right to at least minimum wage, a standard work week, extra pay for overtime hours, a safe workplace, no fault compensation for work-related injury, and union organization. If we accept that paid research subjects are wage earners like any other, then the implications for changes to current practice are substantial.
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
A Handful of Paragraphs on "Translation" and "Norms."
ERIC Educational Resources Information Center
Toury, Gideon
1998-01-01
Presents some thoughts on the issue of translation and norms, focusing on the relationships between social agreements, conventions, and norms; translational norms; acts of translation and translation events; norms and values; norms for translated texts versus norms for non-translated texts; and competing norms. Comments on the reactions to three…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luh, G.C.
1994-01-01
This thesis presents the application of advanced modeling techniques to construct nonlinear forward and inverse models of internal combustion engines for the detection and isolation of incipient faults. The NARMAX (Nonlinear Auto-Regressive Moving Average modeling with eXogenous inputs) technique of system identification proposed by Leontaritis and Billings was used to derive the nonlinear model of an internal combustion engine, over operating conditions corresponding to the I/M240 cycle. The I/M240 cycle is a standard proposed by the United States Environmental Protection Agency to measure tailpipe emissions in inspection and maintenance programs and consists of a driving schedule developed for the purpose of testing compliance with federal vehicle emission standards for carbon monoxide, unburned hydrocarbons, and nitrogen oxides. The experimental work for model identification and validation was performed on a 3.0 liter V6 engine installed in an engine test cell at the Center for Automotive Research at The Ohio State University. In this thesis, different types of model structures were proposed to obtain multi-input multi-output (MIMO) nonlinear NARX models. A modification of the algorithm proposed by He and Asada was used to estimate the robust orders of the derived MIMO nonlinear models. A methodology for the analysis of the inverse NARX model was developed. Two methods were proposed to derive the inverse NARX model: (1) inversion of the forward NARX model; and (2) direct identification of the inverse model from the output-input data set. In this thesis, invertibility, the minimum-phase characteristic of the zero dynamics, and stability analysis of the NARX forward model are also discussed. Stability in the sense of Lyapunov is also investigated to check the stability of the identified forward and inverse models. This application of the inverse problem leads to the estimation of unknown inputs and to actuator fault diagnosis.
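Method (2), direct identification of the inverse model from output-input data, can be sketched for a toy first-order nonlinear system; the system coefficients and the polynomial regressor set below are illustrative assumptions, not those of the engine models in the thesis.

```python
# Direct identification of an inverse model from output-input data: for a toy
# system y[k] = 0.6*y[k-1] + 0.4*u[k-1] - 0.1*y[k-1]^2, the input u[k-1] is
# linear in the regressors {y[k], y[k-1], y[k-1]^2}, so least squares recovers
# the inverse model exactly. All signals here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(1, 500):                       # forward system (unknown in practice)
    y[k] = 0.6 * y[k-1] + 0.4 * u[k-1] - 0.1 * y[k-1]**2

Phi = np.column_stack([y[1:], y[:-1], y[:-1]**2])   # inverse-model regressors
theta, *_ = np.linalg.lstsq(Phi, u[:-1], rcond=None)
u_hat = Phi @ theta
print(np.max(np.abs(u_hat - u[:-1])))         # ~0: exact inverse recovered
```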
A norm knockout method on indirect reciprocity to reveal indispensable norms
Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya
2017-01-01
Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out. PMID:28276485
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, most conventional denoising methods presuppose that the noisy data are sampled on a uniform grid, making them unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding the effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation in non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.
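A full curvelet implementation needs a dedicated toolbox, but the coefficient-thresholding step that FDCT and NFDCT share can be sketched with the FFT standing in for the curvelet transform; the threshold rule below is an illustrative stand-in, not the paper's scale-wise factors.

```python
# Transform-domain threshold denoising sketch: keep only coefficients above a
# noise-dependent threshold, then invert the transform. The FFT stands in for
# the curvelet transform; the 3*median rule is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2*np.pi*25*t) + 0.5*np.sin(2*np.pi*60*t)
noisy = clean + 0.8 * rng.standard_normal(512)

C = np.fft.rfft(noisy)
thresh = 3.0 * np.median(np.abs(C))        # crude noise-level threshold
C[np.abs(C) < thresh] = 0.0                # zero the ineffective coefficients
denoised = np.fft.irfft(C, n=512)
print(np.std(noisy - clean), np.std(denoised - clean))  # residual noise drops
```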
NASA Astrophysics Data System (ADS)
Cao, Pei; Qi, Shuai; Tang, J.
2018-03-01
The impedance/admittance measurements of a piezoelectric transducer bonded to or embedded in a host structure can be used as a damage indicator. When a credible model of the healthy structure, such as a finite element model, is available, using the impedance/admittance change information as input, it is possible to identify both the location and severity of damage. The inverse analysis, however, may be under-determined, as the number of unknowns in high-frequency analysis is usually large while the available input information is limited. The fundamental challenge thus is how to find a small set of solutions that cover the true damage scenario. In this research we cast the damage identification problem into a multi-objective optimization framework to tackle this challenge. With damage locations and severities as unknown variables, one of the objective functions is the difference between the impedance-based model prediction in the parametric space and the actual measurements. Considering that damage occurrence generally affects only a small number of elements, we choose the sparsity of the unknown variables, measured by the ℓ0 norm, as another objective function. Subsequently, a multi-objective Dividing RECTangles (DIRECT) algorithm is developed to facilitate the inverse analysis, where the sparsity is further emphasized by a sigmoid transformation. As a deterministic technique, this approach yields results that are repeatable and conclusive. In addition, only one algorithmic parameter, the number of function evaluations, is needed. Numerical and experimental case studies demonstrate that the proposed framework is capable of obtaining high-quality damage identification solutions with limited measurement information.
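The two objectives can be sketched as follows; the linear sensitivity matrix standing in for the finite element impedance prediction and the sigmoid steepness are illustrative assumptions, not the authors' formulation.

```python
# Sketch of the two objectives described above (not the authors' DIRECT code):
# f1 is the prediction-measurement mismatch, f2 a sigmoid-smoothed stand-in
# for the l0 norm of the damage vector, so near-zero severities count ~0.
import numpy as np

def objectives(x, predict, measured, steepness=200.0):
    f1 = np.linalg.norm(predict(x) - measured)          # impedance misfit
    f2 = np.sum(1.0 / (1.0 + np.exp(-steepness * (np.abs(x) - 0.05))))
    return f1, f2                                       # smoothed "l0" count

# Toy linear sensitivity model standing in for the FE impedance prediction:
rng = np.random.default_rng(3)
S = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[11] = 0.3                 # one damaged element
measured = S @ x_true
print(objectives(x_true, lambda x: S @ x, measured))    # (0.0, ~1.0)
```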
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel
2016-04-01
The work is devoted to a data assimilation algorithm for atmospheric chemistry transport and transformation models. A control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of a target functional combining a control function norm with a norm of the misfit between measured data and its model-simulated analog, with the transport and transformation model acting as a constraint. The constrained minimization problem is solved with the Euler-Lagrange variational principle [1], which allows reducing it to a system of direct, adjoint and control function estimate relations. This provides a physically plausible structure of the resulting analysis without the model error covariance matrices that are sought within conventional approaches to data assimilation. The high dimensionality of atmospheric chemistry models and a real-time mode of operation demand computationally efficient data assimilation algorithms. Computational issues with complicated models can be addressed by using a splitting technique: a complex model is split into a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach, data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes, data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms to assimilate data on the nonlinear transformation stage are compared. We compare data assimilation results for both artificial and real measurement data, and with these data we study the impact of transformation processes and data assimilation on the performance of the modeling system [3]. The work has been partially supported by RFBR grant 14-01-00125 and RAS Presidium II.4P. References: [1] Penenko V.V., Tsvetova E.A., Penenko A.V. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, v. 51, p. 311-319. [2] Penenko A.V., Penenko V.V. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 19(4):69-83, 2014. [3] Penenko A., Penenko V., Nuterman R., Baklanov A., Mahura A. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model // Proc. SPIE 9680, 21st International Symposium Atmospheric and Ocean Optics: Atmospheric Physics, 968076 (November 19, 2015); doi:10.1117/12.2206008; http://dx.doi.org/10.1117/12.2206008
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi
2015-08-01
We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped in a local minimum of the cost function.
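The adjoint and spectral-element machinery is far beyond a snippet, but the optimizer choice is easy to demonstrate: on an ill-conditioned quadratic misfit of the kind inversions produce, an L-BFGS solver converges in few iterations; the toy Hessian below is illustrative.

```python
# L-BFGS on an ill-conditioned quadratic misfit, via SciPy's reference
# implementation. The diagonal "Hessian" spanning three decades stands in
# for the poor conditioning of waveform-inversion misfits.
import numpy as np
from scipy.optimize import minimize

H = np.diag(np.logspace(0, 3, 50))           # ill-conditioned toy Hessian
misfit = lambda m: 0.5 * m @ H @ m
grad = lambda m: H @ m

m0 = np.ones(50)
res = minimize(misfit, m0, jac=grad, method="L-BFGS-B")
print(res.nit, res.fun)                      # typically converges in tens of iterations
```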
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
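A minimal sketch of a standard HMC step (not the authors' tuned variant) shows where the cheap derivatives enter: the leapfrog integration uses the gradient of the negative log-posterior, which the paper obtains from its precomputed strain database; the Gaussian target below is a toy stand-in.

```python
# One HMC step: leapfrog trajectory through a toy Gaussian "posterior",
# followed by a Metropolis accept/reject on the change in total energy.
import numpy as np

def hmc_step(q, U, grad_U, eps=0.1, n_leap=20, rng=np.random.default_rng(4)):
    p = rng.standard_normal(q.shape)             # auxiliary momentum
    q_new = q.copy()
    p_new = p - 0.5 * eps * grad_U(q_new)        # initial half kick
    for _ in range(n_leap):
        q_new = q_new + eps * p_new              # full drift
        p_new = p_new - eps * grad_U(q_new)      # full kick
    p_new = p_new + 0.5 * eps * grad_U(q_new)    # trim the last kick to a half
    dH = U(q_new) - U(q) + 0.5 * (p_new @ p_new - p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

U = lambda q: 0.5 * q @ q                        # toy negative log-posterior
grad_U = lambda q: q
q = np.ones(3)
for _ in range(200):
    q = hmc_step(q, U, grad_U)
print(q)                                         # a sample near the origin
```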
[Regret of female sterilization].
Öhman, Malin Charlotta; Andersen, Lars Franch
2015-11-16
Regret of sterilization is inversely correlated with age at the time of sterilization. The minimum age for legal sterilization in Denmark has recently been lowered to 18 years. In Denmark, surgical refertilization has almost completely been replaced by in vitro fertilization (IVF). In the recent literature, pregnancy rates after surgical refertilization are comparable to those of IVF, and refertilization may in some cases be advantageous over IVF treatment. Women requesting reversal of sterilization should be offered individualized evaluation and differentiated treatment. It is recommended that surgical refertilization be performed at very few centres.
Method for determining formation quality factor from seismic data
Taner, M. Turhan; Treitel, Sven
2005-08-16
A method is disclosed for calculating the quality factor Q from a seismic data trace. The method includes calculating a first and a second minimum phase inverse wavelet at a first and a second time interval along the seismic data trace, synthetically dividing the first wavelet by the second wavelet, Fourier transforming the result of the synthetic division, calculating the logarithm of this quotient of Fourier transforms and determining the slope of a best fit line to the logarithm of the quotient.
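The patented procedure divides wavelets before transforming; the underlying spectral-ratio logic can be sketched directly, since for constant Q the log spectral ratio of two windows separated by traveltime dt is linear in frequency with slope -π·dt/Q. All numbers below are illustrative, not from the patent.

```python
# Spectral-ratio Q estimate: with |W2(f)| = |W1(f)| * exp(-pi*f*dt/Q), the
# log ratio log(W2/W1) = -pi*f*dt/Q is linear in f, so a line fit gives Q.
import numpy as np

Q_true, dt = 80.0, 0.5                       # interval traveltime dt (s), toy values
f = np.linspace(5, 60, 100)                  # usable bandwidth (Hz)
W1 = np.exp(-((f - 30) / 20) ** 2)           # toy source spectrum (early window)
W2 = W1 * np.exp(-np.pi * f * dt / Q_true)   # attenuated later window

slope = np.polyfit(f, np.log(W2 / W1), 1)[0] # best-fit line to the log ratio
print(-np.pi * dt / slope)                   # ~80, the true Q
```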
What humankind can expect with an inversion of Earth’s magnetic field: threats real and imagined
NASA Astrophysics Data System (ADS)
Tsareva, O. O.; Zelenyi, L. M.; Malova, H. V.; Podzolko, M. V.; Popova, E. P.; Popov, V. Yu
2018-02-01
Earth’s global magnetic field generated by an internal dynamo mechanism has been continuously changing on different time scales since its formation. Paleodata indicate that relatively long periods of evolutionary changes can be replaced by quick magnetic inversions. Based on observations, Earth’s magnetic field is currently weakening and the magnetic poles are shifting, possibly indicating the beginning of the inversion process. This paper invokes Gauss coefficients to approximate the behavior of Earth’s magnetic field components over the past 100 years. Using the extrapolation method, it is estimated that the magnetic dipole component will vanish by the year 3600 and at that time the geomagnetic field will be determined by a smaller value of a quadrupole magnetic component. A numerical model is constructed which allows evaluating and comparing both galactic and solar cosmic ray fluxes in Earth’s magnetosphere and on its surface during periods of dipole or quadrupole domination. The role of the atmosphere in absorbing particles of cosmic rays is taken into account. An estimate of the radiation danger to humans is obtained for the ground level and for the International Space Station altitude of ∼ 400 km. It is shown that in the most unfavorable, minimum field interval of the inversion process, the galactic cosmic ray flux increases by no more than a factor of three, implying that the radiation danger does not exceed the maximum permissible dose. Thus, the danger of magnetic inversion periods generally should not have fatal consequences for humans and nature as a whole, despite dramatically changing the structure of Earth’s magnetosphere.
Engineering bacteria to solve the Burnt Pancake Problem
Haynes, Karmella A; Broderick, Marian L; Brown, Adam D; Butner, Trevor L; Dickson, James O; Harden, W Lance; Heard, Lane H; Jessen, Eric L; Malloy, Kelly J; Ogden, Brad J; Rosemond, Sabriya; Simpson, Samantha; Zwack, Erin; Campbell, A Malcolm; Eckdahl, Todd T; Heyer, Laurie J; Poet, Jeffrey L
2008-01-01
Background We investigated the possibility of executing DNA-based computation in living cells by engineering Escherichia coli to address a classic mathematical puzzle called the Burnt Pancake Problem (BPP). The BPP is solved by sorting a stack of distinct objects (pancakes) into proper order and orientation using the minimum number of manipulations. Each manipulation reverses the order and orientation of one or more adjacent objects in the stack. We have designed a system that uses site-specific DNA recombination to mediate inversions of genetic elements that represent pancakes within plasmid DNA. Results Inversions (or "flips") of the DNA fragment pancakes are driven by the Salmonella typhimurium Hin/hix DNA recombinase system that we reconstituted as a collection of modular genetic elements for use in E. coli. Our system sorts DNA segments by inversions to produce different permutations of a promoter and a tetracycline resistance coding region; E. coli cells become antibiotic resistant when the segments are properly sorted. Hin recombinase can mediate all possible inversion operations on adjacent flippable DNA fragments. Mathematical modeling predicts that the system reaches equilibrium after very few flips, where equal numbers of permutations are randomly sorted and unsorted. Semiquantitative PCR analysis of in vivo flipping suggests that inversion products accumulate on a time scale of hours or days rather than minutes. Conclusion The Hin/hix system is a proof-of-concept demonstration of in vivo computation with the potential to be scaled up to accommodate larger and more challenging problems. Hin/hix may provide a flexible new tool for manipulating transgenic DNA in vivo. PMID:18492232
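For comparison with the in vivo implementation, the abstract mathematical problem can be solved exactly by breadth-first search over signed permutations; a flip of the top k pancakes reverses both their order and their orientation. This sketch is a standard solver for the puzzle, not a model of the Hin/hix system.

```python
# Brute-force burnt pancake solver: BFS over signed permutations, where
# flip(stack, k) reverses the order and sign (orientation) of the top k.
from collections import deque

def flip(stack, k):
    return tuple(-p for p in reversed(stack[:k])) + stack[k:]

def min_flips(start):
    goal = tuple(sorted(abs(p) for p in start))   # sorted, burnt side down
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        stack, d = queue.popleft()
        if stack == goal:
            return d
        for k in range(1, len(stack) + 1):
            nxt = flip(stack, k)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

print(min_flips((-2, 1, -3)))   # fewest flips to reach (1, 2, 3)
```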
Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.
1996-01-01
Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as the starting model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate the gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and the consequent decrease in pressure.
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
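TNT itself is a specialized solver, but the problem it accelerates is ordinary non-negative least squares, which SciPy's reference NNLS routine can state on a toy version of the moment inversion; the kernel and moments below are synthetic.

```python
# Non-negative least squares on a toy field-from-moment problem: find
# non-negative moments m such that G @ m best fits the measured field b.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
G = np.abs(rng.standard_normal((200, 60)))   # toy (non-negative) field kernel
m_true = np.zeros(60); m_true[[10, 30]] = [2.0, 1.0]
b = G @ m_true + 0.01 * rng.standard_normal(200)

m_est, residual_norm = nnls(G, b)
print(np.nonzero(m_est > 0.1)[0], residual_norm)   # moments at ~10 and 30
```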
Mollborn, Stefanie; Domingue, Benjamin W; Boardman, Jason D
2014-06-01
Researchers seeking to understand teen sexual behaviors often turn to age norms, but they are difficult to measure quantitatively. Previous work has usually inferred norms from behavioral patterns or measured group-level norms at the individual level, ignoring multiple reference groups. Capitalizing on the multilevel design of the Add Health survey, we measure teen pregnancy norms perceived by teenagers, as well as average norms at the school and peer network levels. School norms predict boys' perceived norms, while peer network norms predict girls' perceived norms. Peer network and individually perceived norms against teen pregnancy independently and negatively predict teens' likelihood of sexual intercourse. Perceived norms against pregnancy predict increased likelihood of contraception among sexually experienced girls, but sexually experienced boys' contraceptive behavior is more complicated: When both the boy and his peers or school have stronger norms against teen pregnancy he is more likely to contracept, and in the absence of school or peer norms against pregnancy, boys who are embarrassed are less likely to contracept. We conclude that: (1) patterns of behavior cannot adequately operationalize teen pregnancy norms, (2) norms are not simply linked to behaviors through individual perceptions, and (3) norms at different levels can operate independently of each other, interactively, or in opposition. This evidence creates space for conceptualizations of agency, conflict, and change that can lead to progress in understanding age norms and sexual behaviors.
Nature of electron trap states under inversion at In0.53Ga0.47As/Al2O3 interfaces
NASA Astrophysics Data System (ADS)
Colleoni, Davide; Pourtois, Geoffrey; Pasquarello, Alfredo
2017-03-01
In and Ga impurities substitutional to Al in the oxide layer resulting from diffusion out of the substrate are identified as candidates for electron traps under inversion at In0.53Ga0.47As/Al2O3 interfaces. Through density-functional calculations, these defects are found to be thermodynamically stable in amorphous Al2O3 and to be able to capture two electrons in a dangling bond upon breaking bonds with neighboring O atoms. Through a band alignment based on hybrid functional calculations, it is inferred that the corresponding defect levels lie at ˜1 eV above the conduction band minimum of In0.53Ga0.47As, in agreement with measured defect densities. These results support the technological importance of avoiding cation diffusion into the oxide layer.
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
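In its simplest form the pseudoinverse spectral filter is a least-squares unmixing of band radiances from channel counts; the channel response matrix below is a hypothetical stand-in, not ERBE calibration data.

```python
# Least-squares unmixing with the Moore-Penrose pseudoinverse: given modeled
# channel responses R (3 channels x 2 bands) and a measurement vector c,
# estimate the shortwave/longwave radiances as pinv(R) @ c.
import numpy as np

R = np.array([[0.95, 0.05],      # shortwave channel response to (SW, LW)
              [0.10, 0.90],      # longwave channel
              [1.00, 1.00]])     # "total" channel
L_true = np.array([120.0, 80.0])              # SW, LW radiances (toy)
c = R @ L_true + np.array([0.5, -0.3, 0.2])   # channel counts with noise

print(np.linalg.pinv(R) @ c)                  # ~ [120, 80]
```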
NASA Astrophysics Data System (ADS)
Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert
2016-08-01
Transport properties of ions have a significant impact on the possibility of rebar corrosion, so knowledge of the diffusion coefficient is important for reinforced concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even not valid. Hence, more rigorous models should be employed to calculate the coefficients. Here we propose the Nernst-Planck and Poisson (NPP) equations, which take into account the concentration and electric potential fields. Based on this model, a special inverse method is presented for the determination of a chloride diffusion coefficient. It requires the measurement of concentration profiles or of the flux on the boundary, and the solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of the diffusion coefficients. Typical examples of the application of the presented method are given.
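The inverse logic can be sketched with plain Fickian diffusion in place of the full Nernst-Planck-Poisson model: simulate a concentration profile for a trial D, take the misfit to a measured profile as the goal function, and minimize over D. All numbers below are illustrative.

```python
# Inverse determination of a diffusion coefficient: an explicit 1-D diffusion
# solver is the forward model; the goal function is the profile misfit and
# its minimizer is the estimated D. (Fickian stand-in for the NPP model.)
import numpy as np
from scipy.optimize import minimize_scalar

def profile(D, n=50, L=0.02, t_end=3.6e5):
    dx = L / n
    dt = 0.4 * dx**2 / D                      # explicit-scheme stability limit
    c = np.zeros(n); c[0] = 1.0               # fixed surface concentration
    for _ in range(int(t_end / dt)):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])
    return c

D_true = 5e-12                                # m^2/s, typical chloride order
measured = profile(D_true)                    # "measured" profile (synthetic)
goal = lambda logD: np.linalg.norm(profile(10**logD) - measured)
res = minimize_scalar(goal, bounds=(-13, -10), method="bounded")
print(10**res.x)                              # ~5e-12, the true D
```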
Diel cycles in dissolved metal concentrations in streams: Occurrence and possible causes
Nimick, David A.; Gammons, Christopher H.; Cleasby, Thomas E.; Madison, James P.; Skaar, Don; Brick, Christine M.
2003-01-01
Substantial diel (24‐hour) cycles in dissolved (0.1‐μm filtration) metal concentrations were observed during low flow for 18 sampling episodes at 14 sites on 12 neutral and alkaline streams draining historical mining areas in Montana and Idaho. At some sites, concentrations of Cd, Mn, Ni, and Zn increased as much as 119, 306, 167, and 500%, respectively, from afternoon minimum values to maximum values shortly after sunrise. Arsenic concentrations exhibited the inverse temporal pattern with increases of up to 54%. Variations in Cu concentrations were small and inconsistent. Diel metal cycles are widespread and persistent, occur over a wide range of metal concentrations, and likely are caused primarily by instream geochemical processes. Adsorption is the only process that can explain the inverse temporal patterns of As and the divalent metals. Diel metal cycles have important implications for many types of water‐quality studies and for understanding trace‐metal mobility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteman, Charles D.; Haiden, Thomas S.; Pospichal, Bernhard
2004-08-01
Air temperature data from five enclosed limestone sinkholes of various sizes and shapes on the 1300 m MSL Duerrenstein Plateau near Lunz, Austria have been analyzed to determine the effect of sinkhole geometry on temperature minima, diurnal temperature ranges, temperature inversion strengths and vertical temperature gradients. Data were analyzed for a non-snow-covered October night and for a snow-covered December night when the temperature fell as low as -28.5°C. Surprisingly, temperatures were similar in two sinkholes with very different drainage areas and depths. A three-layer model was used to show that the sky-view factor is the most important topographic parameter controlling cooling for basins in this size range and that the cooling slows when net longwave radiation at the floor of the sinkhole is nearly balanced by the ground heat flux.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
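The covariance structure at issue is easy to exhibit: single differences against a common reference measurement share that reference's noise, so the differenced covariance is non-diagonal and the proper least-squares weight is its inverse. The sketch below is illustrative, not the paper's fully redundant algorithm.

```python
# Differencing white-noise measurements against a common reference: the
# differenced covariance D @ D.T * s2 has 2*s2 on the diagonal and s2
# off-diagonal, so the correct least-squares weight matrix is non-diagonal.
import numpy as np

n = 5                                   # one-way measurements, toy case
D = np.zeros((n - 1, n))                # differencing against measurement 0
D[:, 0] = -1.0
D[np.arange(n - 1), np.arange(1, n)] = 1.0

s2 = 1.0                                # white-noise variance of each measurement
cov = D @ D.T * s2                      # 2 on the diagonal, 1 off-diagonal
W = np.linalg.inv(cov)                  # proper (non-diagonal) weight matrix
print(cov)
```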
Norm-Aware Socio-Technical Systems
NASA Astrophysics Data System (ADS)
Savarimuthu, Bastin Tony Roy; Ghose, Aditya
The following sections are included: * Introduction * The Need for Norm-Aware Systems * Norms in human societies * Why should software systems be norm-aware? * Case Studies of Norm-Aware Socio-Technical Systems * Human-computer interactions * Virtual environments and multi-player online games * Extracting norms from big data and software repositories * Norms and Sustainability * Sustainability and green ICT * Norm awareness through software systems * Where To, From Here? * Conclusions
Eigenbeam analysis of the diversity in bat biosonar beampatterns.
Caspers, Philip; Müller, Rolf
2015-03-01
A quantitative analysis of the interspecific variability in bat biosonar beampatterns has been carried out on 267 numerical predictions of emission and reception beampatterns from 98 different species. Since these beampatterns did not share a common orientation, an alignment was necessary to analyze the variability in the shape of the patterns. To achieve this, beampatterns were aligned using a pairwise optimization framework based on a rotation-dependent cost function. The sum of the p-norms between beam-gain functions across frequency served as a figure of merit. For a representative subset of the data, it was found that all pairwise beampattern alignments resulted in a unique global minimum. This minimum was found to be contained in a subset of all possible beampattern rotations that could be predicted from the overall beam orientation. Following alignment, the beampatterns were decomposed into principal components. The average beampattern consisted of a symmetric, positionally static single lobe that narrowed and became progressively asymmetric with increasing frequency. The first three "eigenbeams" controlled the beam width of the beampattern across frequency, while higher-rank eigenbeams accounted for symmetry and lobe motion. Reception and emission beampatterns could be distinguished (85% correct classification) based on the first 14 eigenbeams.
NASA Astrophysics Data System (ADS)
Pfeiler, Stefan; Schöner, Wolfgang; Reisenhofer, Stefan; Ottowitz, David; Jochum, Birgit; Kim, Jung-Ho; Hoyer, Stefan; Supper, Robert; Heinrich, Georg
2016-04-01
In the Alps, infrastructure facilities such as roads, routes and buildings are affected by changes in permafrost, which often cause enormous repair costs. Investigation of the degradation of Alpine permafrost has increased in the last decade; however, the understanding of permafrost changes and the atmospheric forcing processes inducing them is still insufficient. Within the project ATMOperm, the application of the geoelectrical method to estimate thawing layer thickness for mountain permafrost is investigated near the highest meteorological observatory of Austria on the Hoher Sonnblick. It is therefore necessary to further optimize the transformation of ERT data to thermal changes in the subsurface. Based on an innovative time-lapse inversion routine for ERT data (Kim J.-H. et al. 2013), a newly developed data analysis software tool, developed by Kim Jung-Ho (KIGAM) in cooperation with the Geophysics group of the Geological Survey of Austria, allows the statistical analysis of the entire sample set of each and every data point measured by the geoelectrical monitoring instrument. On the one hand, this gives an enhanced opportunity to separate "good" from "bad" data points in order to assess the quality of the measurements. On the other hand, the results of the statistical analysis define the impact of every single data point on the inversion routine. The interpretation of the inversion results will be supplemented by temperature logs from selected boreholes along the ERT profile as well as climatic parameters. Kim J.-H., Supper R., Tsourlos P. and Yi M.-J.: Four-dimensional inversion of resistivity monitoring data through Lp norm minimizations. Geophysical Journal International, 195(3), 1640-1656, 2013. doi:10.1093/gji/ggt324. Acknowledgments: The geoelectrical monitoring on Hoher Sonnblick has been installed and is operated in the frame of the project ATMOperm (Atmosphere - permafrost relationship in the Austrian Alps - atmospheric extreme events and their relevance for the mean state of the active layer), funded by the Austrian Academy of Sciences (ÖAW).
Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; Sigloch, Karin
2016-11-01
Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
Kozunov, Vladimir V.; Ossadtchi, Alexei
2015-01-01
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and that their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement was obtained without using any noise normalization techniques for either solution and was preserved over a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses. PMID:25954141
NASA Astrophysics Data System (ADS)
Zhu, Lupei; Zhou, Xiaofeng
2016-10-01
Source inversion of small-magnitude events such as aftershocks or mine collapses requires the use of relatively high frequency seismic waveforms, which are strongly affected by small-scale heterogeneities in the crust. In this study, we developed a new inversion method called gCAP3D for determining the general moment tensor of a seismic source using Green's functions of 3D models. It inherits the advantageous features of the "Cut-and-Paste" (CAP) method to break a full seismogram into the Pnl and surface-wave segments and to allow time shift between observed and predicted waveforms. It uses grid search for 5 source parameters (relative strengths of the isotropic and compensated-linear-vector-dipole components and the strike, dip, and rake of the double-couple component) that minimize the waveform misfit. The scalar moment is estimated using the ratio of L2 norms of the data and synthetics. Focal depth can also be determined by repeating the inversion at different depths. We applied gCAP3D to the 2013 Ms 7.0 Lushan earthquake and its aftershocks using a 3D crustal-upper mantle velocity model derived from ambient noise tomography in the region. We first relocated the events using the double-difference method. We then used the finite-difference method and the reciprocity principle to calculate Green's functions of the 3D model for 20 permanent broadband seismic stations within 200 km of the source region. We obtained moment tensors of the mainshock and 74 aftershocks ranging from Mw 5.2 to 3.4. The results show that the Lushan earthquake is a reverse-faulting event at a depth of 13-15 km on a plane dipping 40-47° to N46°W. Most of the aftershocks occurred off the main rupture plane and have focal mechanisms similar to the mainshock's, except in the proximity of the mainshock, where the aftershocks' focal mechanisms display some variations.
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
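The recursive machinery targets long serial chains, but the underlying second-order search can be seen on a planar two-link arm, where Gauss-Newton steps on the tip-position error use the Jacobian as the first-order derivatives that build the approximate Hessian; link lengths, target, and initial guess below are illustrative.

```python
# Gauss-Newton inverse kinematics for a planar 2-link arm: iterate
# q <- q + J(q)^-1 * (target - tip(q)). With a square Jacobian the
# Gauss-Newton step reduces to a plain Newton step on the tip error.
import numpy as np

l1, l2 = 1.0, 0.8
def tip(q):
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

target = np.array([1.2, 0.6])                 # reachable tip position
q = np.array([0.3, 0.3])                      # initial joint-angle guess
for _ in range(20):
    e = target - tip(q)
    q = q + np.linalg.solve(jac(q), e)        # Newton step on the tip error
print(tip(q), q)                              # tip ~ target
```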
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements.
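The inverse-variance weighted mean used in the photopeak-fitting model is worth writing out: each estimate is weighted by 1/σ², which minimizes the variance of the combined value. The numbers below are illustrative, not measurement data from the study.

```python
# Inverse-variance weighted mean: weights w_i = 1/sigma_i^2 give the
# minimum-variance combination, with combined error 1/sqrt(sum(w)).
import numpy as np

x = np.array([4.8, 5.6, 5.1])          # per-run estimates (ug Al/g Ca, toy)
sigma = np.array([0.9, 1.4, 0.7])      # per-run uncertainties (toy)
w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))         # uncertainty of the combined mean
print(mean, err)
```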
Discrete Inverse and State Estimation Problems
NASA Astrophysics Data System (ADS)
Wunsch, Carl
2006-06-01
The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines. This book addresses these problems using examples taken from geophysical fluid dynamics. It focuses on discrete formulations, both static and time-varying, known variously as inverse, state estimation or data assimilation problems. Starting with fundamental algebraic and statistical ideas, the book guides the reader through a range of inference tools including the singular value decomposition, Gauss-Markov and minimum variance estimates, Kalman filters and related smoothers, and adjoint (Lagrange multiplier) methods. The final chapters discuss a variety of practical applications to geophysical flow problems. Discrete Inverse and State Estimation Problems is an ideal introduction to the topic for graduate students and researchers in oceanography, meteorology, climate dynamics, and geophysical fluid dynamics. It is also accessible to a wider scientific audience; the only prerequisite is an understanding of linear algebra. The book provides a comprehensive introduction to discrete methods of inference from incomplete information; is based upon 25 years of practical experience using real data and models; develops sequential and whole-domain analysis methods from simple least-squares; and contains many examples and problems, with web-based support through MIT OpenCourseWare.
Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M
2007-01-01
We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite the increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and the model, and scatter plots. The performance of our model and of previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
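The computational manageability rests on a standard identity for the single-product case, inv(A⊗B) = inv(A)⊗inv(B), so the full spatiotemporal covariance never has to be inverted at full size; the paper's sum-of-products model is constructed so that a comparably cheap inverse exists. The check below uses toy covariances.

```python
# Kronecker inverse identity: inverting the small spatial and temporal
# factors separately reproduces the inverse of the full covariance.
import numpy as np

rng = np.random.default_rng(6)
A = np.cov(rng.standard_normal((4, 100)))     # small "spatial" covariance
B = np.cov(rng.standard_normal((6, 100)))     # small "temporal" covariance

full = np.linalg.inv(np.kron(A, B))           # 24 x 24 inverse, the slow way
fast = np.kron(np.linalg.inv(A), np.linalg.inv(B))
print(np.allclose(full, fast))                # True
```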
Clark, Margaret S; Lemay, Edward P; Graham, Steven M; Pataki, Sherri P; Finkel, Eli J
2010-07-01
Couples reported on bases for giving support and on relationship satisfaction just prior to and approximately 2 years into marriage. Overall, a need-based, noncontingent (communal) norm was seen as ideal and was followed, and greater use of this norm was linked to higher relationship satisfaction. An exchange norm was seen as not ideal and was followed significantly less frequently than was a communal norm; by 2 years into marriage, greater use of an exchange norm was linked with lower satisfaction. Insecure attachment predicted greater adherence to an exchange norm. Idealization of and adherence to a communal norm dropped slightly across time. As idealization of a communal norm and own use and partner use of a communal norm decreased, people high in avoidance increased their use of an exchange norm, whereas people low in avoidance decreased their use of an exchange norm. Anxious individuals evidenced tighter links between norm use and marital satisfaction relative to nonanxious individuals. Overall, a picture of people valuing a communal norm and striving toward adherence to a communal norm emerged, with secure individuals doing so with more success and equanimity across time than insecure individuals.
Mask manufacturing of advanced technology designs using multi-beam lithography (Part 1)
NASA Astrophysics Data System (ADS)
Green, Michael; Ham, Young; Dillon, Brian; Kasprowicz, Bryan; Hur, Ik Boum; Park, Joong Hee; Choi, Yohan; McMurran, Jeff; Kamberian, Henry; Chalom, Daniel; Klikovits, Jan; Jurkovic, Michal; Hudek, Peter
2016-10-01
As optical lithography is extended into 10nm and below nodes, advanced designs are becoming a key challenge for mask manufacturers. Techniques including advanced Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) result in structures that pose a range of issues across the mask manufacturing process. Among the new challenges are continued shrinking Sub-Resolution Assist Features (SRAFs), curvilinear SRAFs, and other complex mask geometries that are counter-intuitive relative to the desired wafer pattern. Considerable capability improvements over current mask making methods are necessary to meet the new requirements particularly regarding minimum feature resolution and pattern fidelity. Advanced processes using the IMS Multi-beam Mask Writer (MBMW) are feasible solutions to these coming challenges. In this paper, we study one such process, characterizing mask manufacturing capability of 10nm and below structures with particular focus on minimum resolution and pattern fidelity.
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
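As a point of reference for the comparison above, a toy inverse-probability-of-treatment-weighted (IPTW) estimate on simulated data; the data-generating process, sample size and true effect are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.standard_normal(n)                        # confounder
p = 1 / (1 + np.exp(-x))                          # true propensity e(X)
t = rng.binomial(1, p)                            # treatment assignment
y = 2.0 + 1.0 * t + x + rng.standard_normal(n)    # outcome, true effect 1.0

# Treatment-specific means via stabilized inverse-probability weighting.
mu1 = np.sum(t * y / p) / np.sum(t / p)
mu0 = np.sum((1 - t) * y / (1 - p)) / np.sum((1 - t) / (1 - p))
print(mu1 - mu0)                                  # approximately 1.0 (ATE)
```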
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
Application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity typical of this class of problems require robust and efficient inverse-problem techniques able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique proved to be the best strategy, combining the ability of PSO to approach the basin of attraction of the global minimum with the efficiency of the Nelder-Mead algorithm in locating the minimum itself.
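A compact sketch of such a global-local hybrid, assuming a standard inertia-weight PSO and SciPy's Nelder-Mead, with the Rastrigin function standing in for the deep-drawing fitness surface; none of the hyperparameters are the authors' values.

```python
import numpy as np
from scipy.optimize import minimize

def fitness(p):
    # Multimodal surrogate objective (Rastrigin), global minimum at 0.
    return 10 * len(p) + np.sum(p**2 - 10 * np.cos(2 * np.pi * p))

rng = np.random.default_rng(3)
n_particles, dim, iters = 30, 2, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.apply_along_axis(fitness, 1, pos)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):                     # basic inertia-weight PSO
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.apply_along_axis(fitness, 1, pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# Local refinement: Nelder-Mead started from the PSO incumbent.
res = minimize(fitness, gbest, method="Nelder-Mead")
print(res.x, res.fun)
```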
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium- and low-magnitude local earthquakes. The method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude-distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis, instead of searching for a minimum of the L2 or L1 norm. If no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. If a detectable local earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network, and one must ensure that no dead traces are involved in the processing. Compared to methods based on the L2 and even the L1 norm, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within the network: because a back-projection matrix, independent of the registered amplitude, is computed and stored for each seismic station, computing time is saved when calculating the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated first on synthetic data. It was then applied to data from 43 local earthquakes of low and medium magnitude (1.7 to 4.3) recorded by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria, an area of relatively high earthquake hazard and risk. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
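The detection step lends itself to a short sketch; the attenuation law, its coefficients and the station geometry below are placeholders, not the calibrated empirical relation of the paper.

```python
import numpy as np

# Back-project station peak amplitudes onto a grid with an assumed
# attenuation law log10(v) = M - k*log10(r) - c, then keep the MINIMUM
# pseudo-magnitude per grid point, so one outlier station cannot inflate it.
def pseudo_magnitudes(v_max, stations, grid, k=1.5, c=-3.0):
    # v_max: (n_sta,) peak resultant velocities; stations, grid: (., 2) coords
    r = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    r = np.maximum(r, 0.1)                     # avoid log of zero at stations
    m = np.log10(v_max)[None, :] + k * np.log10(r) + c   # (n_grid, n_sta)
    return m.min(axis=1)

stations = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
gx, gy = np.meshgrid(np.linspace(0, 10, 51), np.linspace(0, 10, 51))
grid = np.column_stack([gx.ravel(), gy.ravel()])
v_max = np.array([2e-4, 1e-4, 9e-5, 5e-5])     # synthetic station peaks
pm = pseudo_magnitudes(v_max, stations, grid)
print(grid[pm.argmax()])                       # grid point nearest the putative epicenter
```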
Direct statistical modeling and its implications for predictive mapping in mining exploration
NASA Astrophysics Data System (ADS)
Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila
2010-05-01
Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" for input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. Consequently, it is often difficult to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach particularly focuses on the analysis of spatial relationships between the location of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as a granite - and faults or ductile shear-zones on the spatial location of ore deposits (point objects). The method is designed to ensure non-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of object (polygons or polylines) is given by a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface, the mean polyline length, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.
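One ingredient of such direct modeling can be sketched as rejection sampling of deposit locations whose acceptance probability decays with distance to a controlling structure; the exponential law and its decay length are arbitrary illustrations, not the paper's parametrization.

```python
import numpy as np

rng = np.random.default_rng(4)
# A single fault trace, discretized as a polyline across a 100 x 100 area.
fault = np.column_stack([np.linspace(0, 100, 500), np.full(500, 50.0)])

def sample_points(n, decay=5.0):
    pts = []
    while len(pts) < n:                        # rejection sampling
        cand = rng.uniform(0, 100, 2)
        d = np.min(np.linalg.norm(fault - cand, axis=1))
        if rng.random() < np.exp(-d / decay):  # distance-decay acceptance
            pts.append(cand)
    return np.array(pts)

deposits = sample_points(200)   # synthetic "deposits" with a known distance law
```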
NASA Astrophysics Data System (ADS)
Ialongo, S.; Cella, F.; Fedi, M.; Florio, G.
2011-12-01
Most geophysical inversion problems are characterized by a number of data considerably higher than the number of unknown parameters, which corresponds to solving highly underdetermined systems. To get a unique solution, a priori information must therefore be introduced. We here analyze the inversion of the gravity gradient tensor (GGT). Previous approaches to inverting several gradient components, jointly or independently, are those of Li (2001), proposing an algorithm using a depth weighting function, and Zhdanov et al. (2004), providing a well-focused inversion of gradient data. Both methods give a much-improved solution compared with the minimum length solution, which is invariably shallow and not representative of the true source distribution. For very underdetermined problems, this feature is due to the role of the depth weighting matrices used by both methods. Recently, however, Cella and Fedi (2011) showed that for magnetic and gravity data the depth weighting function has to be defined carefully, under a preliminary application of Euler Deconvolution or Depth from Extreme Points methods, yielding the appropriate structural index and then using it as the decay rate of the weighting function. We therefore propose to extend this approach to invert the GGT jointly or component by component, using the structural index as the weighting function decay rate. In the case of a joint inversion, gravity data can be added as well. This multicomponent case is also relevant because the simultaneous use of several components and gravity increases the number of data and reduces the algebraic ambiguity compared to the inversion of a single component. The reduction of such ambiguity was shown in Fedi et al. (2005) to be decisive for improved depth resolution in inverse problems, independently of any form of depth weighting function. The method is demonstrated on synthetic cases and applied to real cases, such as the Vredefort impact area (South Africa), characterized by a complex density distribution that well defines a central uplift area, ring structures and low-density sediments. REFERENCES Cella F., and Fedi M., 2011, Inversion of potential field data using the structural index as weighting function rate decay, Geophysical Prospecting, doi: 10.1111/j.1365-2478.2011.00974.x. Fedi M., Hansen P. C., and Paoletti V., 2005, Analysis of depth resolution in potential-field inversion, Geophysics, 70, no. 6. Li, Y., 2001, 3-D inversion of gravity gradiometry data: 71st Annual Meeting, SEG, Expanded Abstracts, 1470-1473. Zhdanov, M. S., Ellis, R. G., and Mukherjee, S., 2004, Regularized focusing inversion of 3-D gravity tensor data: Geophysics, 69, 925-937.
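A bare-bones illustration of the role of the depth-weighting decay rate in a linear Tikhonov inversion; the kernel G is a random stand-in for a true gravity-gradient operator, and beta, z0 and mu are illustrative values, with beta playing the role assigned above to the structural index.

```python
import numpy as np

# Minimize ||d - G m||^2 + mu ||W m||^2 with W_jj = (z_j + z0)^(beta/2).
rng = np.random.default_rng(5)
n_data, n_cells = 60, 300
G = rng.standard_normal((n_data, n_cells))       # placeholder kernel
z = np.repeat(np.arange(1.0, 11.0), 30)          # cell depths, 10 layers
d = G @ rng.standard_normal(n_cells)             # synthetic data

beta, z0, mu = 3.0, 0.5, 1e-2
W = np.diag((z + z0) ** (beta / 2))
Wi = np.linalg.inv(W)

# Solve in the weighted variable u = W m, then map back; the weighting
# counteracts the kernel decay that otherwise concentrates the
# minimum-norm model at the shallowest cells.
A = G @ Wi
u = np.linalg.solve(A.T @ A + mu * np.eye(n_cells), A.T @ d)
m = Wi @ u
```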
Death from respiratory diseases and temperature in Shiraz, Iran (2006-2011).
Dadbakhsh, Manizhe; Khanjani, Narges; Bahrampour, Abbas; Haghighi, Pegah Shoae
2017-02-01
Some studies have suggested that the number of deaths increases as temperature drops or rises beyond the human thermal comfort zone. The present study was conducted to evaluate the relation between respiratory-related mortality and temperature in Shiraz, Iran. In this ecological study, data on the number of respiratory-related deaths, sorted by age and gender, as well as average, minimum, and maximum ambient air temperatures during 2007-2011 were examined. The relationship between air temperature and respiratory-related deaths was assessed by crude and adjusted negative binomial regression, adjusted for humidity, rainfall, wind speed and direction, and air pollutants including CO, NOx, PM10, SO2, O3, and THC. Spearman and Pearson correlations were also calculated between air temperature and respiratory-related deaths. The analysis was done using MINITAB 16 and STATA 11. During this period, 2598 respiratory-related deaths occurred in Shiraz. The minimum number of respiratory-related deaths among all subjects occurred at an average temperature of 25 °C. There was a significant inverse relationship between average temperature and respiratory-related deaths among all subjects and women, and also between average temperature and respiratory-related deaths among all subjects, men, and women in the next month. The results suggest that cold temperatures can increase the number of respiratory-related deaths, and therefore policies to reduce mortality in cold weather, especially among patients with respiratory diseases, should be implemented.
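A sketch of the core regression on simulated monthly counts, assuming the statsmodels negative binomial GLM family; the real analysis adjusted for many more covariates, and the data below are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
months = 60
temp = rng.uniform(5, 35, months)
humidity = rng.uniform(20, 80, months)
lam = np.exp(3.0 - 0.03 * temp + 0.005 * humidity)   # built-in inverse temperature effect
deaths = rng.poisson(lam)                            # stand-in monthly counts

# Negative binomial regression of counts on temperature, adjusted for humidity.
X = sm.add_constant(np.column_stack([temp, humidity]))
model = sm.GLM(deaths, X, family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```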
NASA Astrophysics Data System (ADS)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
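The two-level structure of the search can be sketched generically: an outer genetic algorithm proposes initial guesses, and each guess's fitness is the residual reached by a local minimization started from it. Here a gradient method on a toy multimodal objective stands in for the level set evolution on a flow model; population size, mutation scale and generations are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Asymmetric double well per dimension: local minima of different depth,
    # so the local result depends strongly on the initial guess.
    return np.sum(p**4 - p**2 + 0.3 * p)

def ga_fitness(guess):
    # Fitness of an initial guess = residual after local minimization.
    return minimize(objective, guess, method="L-BFGS-B").fun

rng = np.random.default_rng(7)
pop = rng.uniform(-2, 2, (20, 3))                 # population of initial guesses
for gen in range(15):
    fit = np.array([ga_fitness(g) for g in pop])
    parents = pop[np.argsort(fit)[:10]]           # truncation selection (elitism)
    kids = parents[rng.integers(0, 10, 10)] + 0.2 * rng.standard_normal((10, 3))
    pop = np.vstack([parents, kids])              # keep parents, add mutated kids

best = pop[np.argmin([ga_fitness(g) for g in pop])]
print(best, ga_fitness(best))
```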
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vimont, Daniel
This project funded two efforts at understanding the interactions between Central Pacific ENSO events, the mid-latitude atmosphere, and decadal variability in the Pacific. The first was an investigation of conditions that lead to Central Pacific (CP) and East Pacific (EP) ENSO events through the use of linear inverse modeling with defined norms. The second effort was a modeling study that combined output from the National Center for Atmospheric Research (NCAR) Community Atmospheric Model (CAM4) with the Battisti (1988) intermediate coupled model. The intent of the second activity was to investigate the relationship between the atmospheric North Pacific Oscillation (NPO), the Pacific Meridional Mode (PMM), and ENSO. These two activities are described herein.
NASA Technical Reports Server (NTRS)
Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.
1987-01-01
In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
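The second method can be illustrated on a one-dimensional surrogate: Newton iterations whose tridiagonal Jacobian is solved by banded Gaussian elimination, here for a Bratu-type equation rather than the swirling-flow system.

```python
import numpy as np
from scipy.linalg import solve_banded

# Newton's method for u'' + 2*exp(u) = 0, u(0) = u(1) = 0, discretized by
# central differences; the Jacobian is tridiagonal and is solved in banded form.
n = 99
h = 1.0 / (n + 1)
u = np.zeros(n)
for it in range(20):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    lap[0] = u[1] - 2 * u[0]            # boundary values are zero
    lap[-1] = u[-2] - 2 * u[-1]
    F = lap / h**2 + 2 * np.exp(u)      # residual of the discretized ODE

    # Tridiagonal Jacobian in solve_banded's (1, 1) layout.
    ab = np.zeros((3, n))
    ab[0, 1:] = 1.0 / h**2              # superdiagonal
    ab[1, :] = -2.0 / h**2 + 2 * np.exp(u)
    ab[2, :-1] = 1.0 / h**2             # subdiagonal

    du = solve_banded((1, 1), ab, -F)   # banded Gaussian elimination
    u += du
    if np.linalg.norm(du) < 1e-12:      # quadratic convergence kicks in fast
        break
```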
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
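A one-dimensional analogue of the backproject-then-filter ordering, assuming a circular-convolution forward operator: the Tikhonov pseudoinverse factors into the adjoint (correlation with the kernel) followed by the filter 1/(|H|^2 + lambda). The signal, kernel and regularization value are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 256
x = np.zeros(n); x[60:70] = 1.0                     # unknown signal
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
H = np.fft.fft(np.fft.ifftshift(h))                 # kernel spectrum
b = np.fft.ifft(H * np.fft.fft(x)).real + 0.01 * rng.standard_normal(n)

lam = 1e-2
backproj = np.fft.ifft(np.conj(H) * np.fft.fft(b))  # adjoint step: A^T b
x_hat = np.fft.ifft(np.fft.fft(backproj) / (np.abs(H) ** 2 + lam)).real
```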
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, the intersection/algebraic product triangular norm, the maximum/drastic sum triangular conorm, the Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
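A toy numerical rendering of this controller structure, with L-type and Gamma-type inputs, Mamdani minimum inference and center-of-sums defuzzification. The breakpoints and output centroids are invented, and firing strengths stand in for clipped-set areas, so this only mirrors the structure, not the paper's closed-form models.

```python
import numpy as np

def l_type(x):     return np.clip((1 - x) / 2, 0, 1)   # 1 at -1, 0 at +1
def gamma_type(x): return np.clip((1 + x) / 2, 0, 1)   # 0 at -1, 1 at +1

def fuzzy_pi_increment(e, de):
    # Rule base: N/N -> negative, N/P or P/N -> zero, P/P -> positive,
    # combined with Mamdani minimum inference.
    w_nn = min(l_type(e), l_type(de))
    w_nz = min(l_type(e), gamma_type(de))
    w_pz = min(gamma_type(e), l_type(de))
    w_pp = min(gamma_type(e), gamma_type(de))
    centers = np.array([-1.0, 0.0, 0.0, 1.0])      # output set centroids (toy)
    weights = np.array([w_nn, w_nz, w_pz, w_pp])   # stand-in for set areas
    return np.sum(centers * weights) / max(np.sum(weights), 1e-9)  # center of sums

print(fuzzy_pi_increment(0.4, -0.1))
```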
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. The method is based on elementary similarity operations and mainly involves the use of vector companion forms, and is thus very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable ones, and also to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrix can be obtained by this method: (1) a numerical one, which is unique, and (2) a parametric one, whose parameters are determined so as to achieve a gain matrix with minimum Frobenius norm. Numerical examples are presented to demonstrate the advantages of the proposed method.
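For comparison, pole assignment in a two-input system can be checked numerically with SciPy. Note that place_poles uses the extra input degrees of freedom for robustness rather than for the minimum-Frobenius-norm gain pursued in the article, so this only demonstrates the assignment itself.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-1., -2., -3.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])                          # two inputs
desired = np.array([-2.0, -3.0, -4.0])

fsf = place_poles(A, B, desired)
K = fsf.gain_matrix                               # feedback law u = -K x
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # -> [-4, -3, -2]
print(np.linalg.norm(K, "fro"))                   # Frobenius norm of the gain
```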
Using Peer Injunctive Norms to Predict Early Adolescent Cigarette Smoking Intentions
Zaleski, Adam C.; Aloise-Young, Patricia A.
2013-01-01
The present study investigated the importance of the perceived injunctive norm in predicting early adolescent cigarette smoking intentions. A total of 271 6th graders completed a survey that included perceived prevalence of friend smoking (descriptive norm), perceptions of friends' disapproval of smoking (injunctive norm), and future smoking intentions. Participants also listed their five best friends, from which the actual injunctive norm was calculated. Results showed that smoking intentions were significantly correlated with the perceived injunctive norm but not with the actual injunctive norm. Second, the perceived injunctive norm predicted an additional 3.4% of the variance in smoking intentions above and beyond the perceived descriptive norm. These results demonstrate the importance of the perceived injunctive norm in predicting early adolescent smoking intentions. PMID:24078745
Social norms and their influence on eating behaviours.
Higgs, Suzanne
2015-03-01
Social norms are implicit codes of conduct that provide a guide to appropriate action. There is ample evidence that social norms about eating have a powerful effect on both food choice and amounts consumed. This review explores the reasons why people follow social eating norms and the factors that moderate norm following. It is proposed that eating norms are followed because they provide information about safe foods and facilitate food sharing. Norms are a powerful influence on behaviour because following (or not following) norms is associated with social judgements. Norm following is more likely when there is uncertainty about what constitutes correct behaviour and when there is greater shared identity with the norm referent group. Social norms may affect food choice and intake by altering self-perceptions and/or by altering the sensory/hedonic evaluation of foods. The same neural systems that mediate the rewarding effects of food itself are likely to reinforce the following of eating norms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Reference genes for reverse transcription quantitative PCR in canine brain tissue.
Stassen, Quirine E M; Riemers, Frank M; Reijmerink, Hannah; Leegwater, Peter A J; Penning, Louis C
2015-12-09
In the last decade canine models have been used extensively to study genetic causes of neurological disorders such as epilepsy and Alzheimer's disease and to unravel their pathophysiological pathways. Reverse transcription quantitative polymerase chain reaction is a sensitive and inexpensive method to study expression levels of genes involved in disease processes. Accurate normalisation with stably expressed so-called reference genes is crucial for reliable expression analysis. Following the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, the expression of ten frequently used reference genes, namely YWHAZ, HMBS, B2M, SDHA, GAPDH, HPRT, RPL13A, RPS5, RPS19 and GUSB, was evaluated in seven brain regions (frontal lobe, parietal lobe, occipital lobe, temporal lobe, thalamus, hippocampus and cerebellum) and in whole brain of healthy dogs. The stability of expression varied between brain areas. Using the GeNorm and NormFinder software, HMBS, GAPDH and HPRT were the most reliable reference genes for whole brain. Furthermore, based on GeNorm calculations, it was concluded that as few as two to three reference genes are sufficient to obtain reliable normalisation, irrespective of the brain area. Our results amend and extend the limited previously published data on canine brain reference genes. Despite the excellent expression stability of HMBS, GAPDH and HPRT, the evaluation of expression stability of reference genes must be a standard and integral part of experimental design and subsequent data analysis.
Robot map building based on fuzzy-extending DSmT
NASA Astrophysics Data System (ADS)
Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun
2007-11-01
With the extensive application of mobile robots in many different fields, map building in unknown environments has become one of the principal issues in the field of intelligent mobile robots. However, information acquired during map building is characterized by uncertainty, imprecision and even high conflict, especially when building grid maps from sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (the algebraic product, bounded product, Einstein product and default minimum operators), in order to develop a more general and flexible combination rule for wider application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching (NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct a map with the new tool in an indoor environment, comparing the map-building performance of the four T-norm operators as the Pioneer II mobile robot runs along the same trace. Finally, we conclude that this study develops a new idea for extending DSmT, provides a new approach for autonomous navigation of mobile robots, and provides a human-computer interface to manage and manipulate the robot remotely.
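The four T-norms named above, written out for membership degrees in [0, 1]; these are the standard definitions, not specific to the paper's implementation.

```python
import numpy as np

def t_algebraic(a, b): return a * b
def t_bounded(a, b):   return np.maximum(0.0, a + b - 1.0)
def t_einstein(a, b):  return (a * b) / (2.0 - (a + b - a * b))
def t_minimum(a, b):   return np.minimum(a, b)

a, b = 0.7, 0.6
for t in (t_algebraic, t_bounded, t_einstein, t_minimum):
    print(t.__name__, t(a, b))   # four different conjunction strengths
```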
Muslim women's narratives about bodily change and care during critical illness: a qualitative study.
Zeilani, Ruqayya; Seymour, Jane E
2012-03-01
To explore experiences of Jordanian Muslim women in relation to bodily change during critical illness. A longitudinal narrative approach was used. A purposive sample of 16 Jordanian women who had spent a minimum of 48 hr in intensive care participated in one to three interviews over a 6-month period. Three main categories emerged from the analysis: the dependent body reflects changes in the women's bodily strength and performance, as they moved from being care providers into those in need of care; this was associated with experiences of a sense of paralysis, shame, and burden. The social body reflects the essential contribution that family help or nurses' support (as a proxy for family) made to women's adjustment to bodily change and their ability to make sense of their illness. The cultural body reflects the effect of cultural norms and Islamic beliefs on the women's interpretation of their experiences and relates to the women's understandings of bodily modesty. This study illustrates, by in-depth focus on Muslim women's narratives, the complex interrelationship between religious beliefs, cultural norms, and the experiences and meanings of bodily changes during critical illness. This article provides insights into vital aspects of Muslim women's needs and preferences for nursing care. It highlights the importance of including an assessment of culture and spiritual aspects when nursing critically ill patients. © 2011 Sigma Theta Tau International.
Siupsinskiene, Nora; Lycke, Hugo
2011-07-01
This prospective cross-sectional study examines the effects of voice training on vocal capabilities in vocally healthy age and gender differentiated groups measured by voice range profile (VRP) and speech range profile (SRP). Frequency and intensity measurements of the VRP and SRP using standard singing and speaking voice protocols were derived from 161 trained choir singers (21 males, 59 females, and 81 prepubescent children) and from 188 nonsingers (38 males, 89 females, and 61 children). When compared with nonsingers, both genders of trained adult and child singers exhibited increased mean pitch range, highest frequency, and VRP area in high frequencies (P<0.05). Female singers and child singers also showed significantly increased mean maximum voice intensity, intensity range, and total VRP area. The logistic regression analysis showed that VRP pitch range, highest frequency, maximum voice intensity, and maximum-minimum intensity range, and SRP slope of speaking curve were the key predictors of voice training. Age, gender, and voice training differentiated norms of VRP and SRP parameters are presented. Significant positive effect of voice training on vocal capabilities, mostly singing voice, was confirmed. The presented norms for trained singers, with key parameters differentiated by gender and age, are suggested for clinical practice of otolaryngologists and speech-language pathologists. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
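The decoder described here, linear summation with an L1 penalty followed by a sigmoidal nonlinearity, amounts to L1-regularized logistic regression; a sketch with random surrogate spike counts in place of the model-V1 responses (the sizes, threshold and C value are all invented).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_trials, n_neurons = 400, 500
X = rng.poisson(2.0, (n_trials, n_neurons)).astype(float)  # surrogate spike counts
w_true = np.zeros(n_neurons); w_true[:5] = 1.0             # few informative cells
y = (X @ w_true + rng.standard_normal(n_trials) > 10).astype(int)

# Smaller C -> stronger L1 penalty -> fewer non-zero "synaptic weights".
dec = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
dec.fit(X, y)
print((dec.coef_ != 0).sum(), "of", n_neurons, "weights non-zero")
```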
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
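The evaluation recipe, repeated noisy reconstructions split into squared bias and variance at each regularization level, can be reproduced on a toy linear problem; the operator below is a random ill-conditioned matrix, not a diffusion forward model.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 50
A = rng.standard_normal((n, n)) @ np.diag(1.0 / np.arange(1, n + 1))
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y_clean = A @ x_true

for lam in [1e-4, 1e-2, 1e0]:
    solve = np.linalg.inv(A.T @ A + lam * np.eye(n)) @ A.T  # Tikhonov inverse
    recons = []
    for _ in range(100):                 # 100 repeated noisy reconstructions
        y = y_clean + 0.05 * rng.standard_normal(n)
        recons.append(solve @ y)
    recons = np.array(recons)
    bias2 = np.mean((recons.mean(axis=0) - x_true) ** 2)
    var = np.mean(recons.var(axis=0))
    # Bias dominates at large lam; variance dominates as lam shrinks.
    print(f"lam={lam:g}  bias^2={bias2:.4f}  var={var:.4f}  MSE={bias2 + var:.4f}")
```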
NASA Astrophysics Data System (ADS)
Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.
2016-04-01
Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure accounts for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
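In one dimension the optimal transport misfit has a closed form that makes its behavior easy to inspect: for non-negative, unit-mass signals the W1 distance is the L1 distance between cumulative distributions. The crude positivity/normalization step below is only a stand-in for the paper's treatment of signed, non-conserved seismograms.

```python
import numpy as np

def w1(f, g, dt=1.0):
    f = np.abs(f) / np.sum(np.abs(f))        # crude positivity + unit mass
    g = np.abs(g) / np.sum(np.abs(g))
    return dt * np.sum(np.abs(np.cumsum(f) - np.cumsum(g)))

t = np.linspace(0, 1, 500)
ricker = lambda t0: (1 - 2 * (np.pi * 25 * (t - t0)) ** 2) \
                    * np.exp(-((np.pi * 25 * (t - t0)) ** 2))
obs = ricker(0.5)
for shift in [0.0, 0.02, 0.05, 0.1]:
    syn = ricker(0.5 + shift)
    # W1 keeps growing smoothly with the shift, while the L2 misfit
    # saturates once the wavelets no longer overlap (the cycle-skipping regime).
    print(shift, w1(obs, syn), np.sum((obs - syn) ** 2))
```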
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce ambiguity related to nonlinearities in the resistivity model-data relationship, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, the marginal probability density functions and an ensemble of acceptable models. This provides new insight into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods, so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods, and correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of layered resistivity structure.
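A heavily simplified skeleton of such a direct search, keeping an ensemble of good models and resampling near them. The Gaussian perturbation stands in for the NA's true Voronoi-cell resampling, and the "forward model" is a placeholder rather than a 1-D resistivity solver.

```python
import numpy as np

rng = np.random.default_rng(11)
d_obs = np.array([120.0, 45.0])                  # e.g. [resistivity, thickness]

def misfit(m):
    # Placeholder forward model + relative L2 misfit.
    return np.sum(((m - d_obs) / d_obs) ** 2)

models = rng.uniform([1, 1], [500, 100], (200, 2))
for _ in range(20):
    order = np.argsort([misfit(m) for m in models])
    best = models[order[:20]]                    # retained ensemble
    jitter = 0.05 * np.array([500, 100])
    models = np.vstack([best,
                        best[rng.integers(0, 20, 180)]
                        + jitter * rng.standard_normal((180, 2))])

ensemble = models[np.argsort([misfit(m) for m in models])[:50]]
print(ensemble.mean(axis=0), ensemble.std(axis=0))   # parameter constraints
```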
Feasible Muscle Activation Ranges Based on Inverse Dynamics Analyses of Human Walking
Simpson, Cole S.; Sohn, M. Hongchul; Allen, Jessica L.; Ting, Lena H.
2015-01-01
Although it is possible to produce the same movement using an infinite number of different muscle activation patterns owing to musculoskeletal redundancy, the degree to which observed variations in muscle activity can deviate from optimal solutions computed from biomechanical models is not known. Here, we examined the range of biomechanically permitted activation levels in individual muscles during human walking using a detailed musculoskeletal model and experimentally-measured kinetics and kinematics. Feasible muscle activation ranges define the minimum and maximum possible level of each muscle’s activation that satisfy inverse dynamics joint torques assuming that all other muscles can vary their activation as needed. During walking, 73% of the muscles had feasible muscle activation ranges that were greater than 95% of the total muscle activation range over more than 95% of the gait cycle, indicating that, individually, most muscles could be fully active or fully inactive while still satisfying inverse dynamics joint torques. Moreover, the shapes of the feasible muscle activation ranges did not resemble previously-reported muscle activation patterns nor optimal solutions, i.e. static optimization and computed muscle control, that are based on the same biomechanical constraints. Our results demonstrate that joint torque requirements from standard inverse dynamics calculations are insufficient to define the activation of individual muscles during walking in healthy individuals. Identifying feasible muscle activation ranges may be an effective way to evaluate the impact of additional biomechanical and/or neural constraints on possible versus actual muscle activity in both normal and impaired movements. PMID:26300401
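The range computation itself is a pair of linear programs per muscle; a sketch with a random torque-mapping matrix standing in for the musculoskeletal model and inverse dynamics torques.

```python
import numpy as np
from scipy.optimize import linprog

# For each muscle, find the min and max activation consistent with
# R a = tau and 0 <= a <= 1, letting all other muscles vary freely.
rng = np.random.default_rng(12)
n_joints, n_muscles = 3, 10
R = rng.standard_normal((n_joints, n_muscles))   # placeholder torque mapping
a_ref = rng.uniform(0.2, 0.8, n_muscles)         # a known feasible pattern
tau = R @ a_ref                                  # "inverse dynamics" torques

for j in range(n_muscles):
    c = np.zeros(n_muscles); c[j] = 1.0
    lo = linprog(c,  A_eq=R, b_eq=tau, bounds=[(0, 1)] * n_muscles)
    hi = linprog(-c, A_eq=R, b_eq=tau, bounds=[(0, 1)] * n_muscles)
    print(f"muscle {j}: feasible activation in [{lo.fun:.2f}, {-hi.fun:.2f}]")
```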
NASA Astrophysics Data System (ADS)
Rocadenbosch, Francesc; Comeron, Adolfo; Vazquez, Gregori; Rodriguez-Gomez, Alejandro; Soriano, Cecilia; Baldasano, Jose M.
1998-12-01
Up to now, retrieval of atmospheric extinction and backscatter has mainly relied on standard straightforward memoryless procedures such as the slope method, exponential-curve fitting and Klett's method. Yet their performance is ultimately limited by an inherent lack of adaptability, as they only work with present returns: neither past estimates, nor the statistics of the signals, nor a priori uncertainties are taken into account. In this work, a first inversion of the backscatter and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is tackled by means of an extended Kalman filter (EKF), which overcomes these limitations. As successive return signals arrive, the filter updates itself, weighted by the imbalance between the a priori estimates of the optical parameters and the new ones, based on a minimum variance criterion. Calibration errors and initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach enables retrieval of the sought-after optical parameters as time-range-dependent functions and hence tracking of the atmospheric evolution, its performance being limited only by the quality and availability of the a priori information and the accuracy of the assumed atmospheric model. The study ends with an encouraging practical inversion of a live scene measured with the Nd:YAG elastic-backscatter lidar station at our premises in Barcelona.
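A generic EKF predict/update step of the kind applied here; f, h and their Jacobians F, H are placeholders for the paper's lidar stochastic model, not its actual equations.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict with the (possibly nonlinear) state model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: weight the innovation by the minimum-variance Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Each new lidar return plays the role of the measurement z, so the optical parameters are re-estimated range by range as the record evolves.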
NASA Astrophysics Data System (ADS)
Croitoru, Madalina; Oren, Nir; Miles, Simon; Luck, Michael
Norms impose obligations, permissions and prohibitions on individual agents operating as part of an organisation. Typically, the purpose of such norms is to ensure that an organisation acts in some socially (or mutually) beneficial manner, possibly at the expense of individual agent utility. In this context, agents are norm-aware if they are able to reason about which norms are applicable to them, and to decide whether to comply with or ignore them. While much work has focused on the creation of norm-aware agents, much less has been concerned with aiding system designers in understanding the effects of norms on a system. The ability to understand such norm effects can aid the designer in avoiding incorrect norm specification, eliminating redundant norms and reducing normative conflict. In this paper, we address the problem of norm understanding by providing explanations as to why a norm is applicable, violated, or in some other state. We make use of conceptual graph based semantics to provide a graphical representation of the norms within a system. Given knowledge of the current and historical state of the system, such a representation allows for explanation of the state of norms, showing for example why they may have been activated or violated.
An integrated approach to evaluate the Aji-Chai potash resources in Iran using potential field data
NASA Astrophysics Data System (ADS)
Abedi, Maysam
2018-03-01
This work presents an integrated application of potential field data to discover potash-bearing evaporite sources in the Aji-Chai salt deposit, located in East Azerbaijan province, northwest Iran. The low density and diamagnetic effect of salt minerals, i.e. potash, give rise to promising potential field anomalies that help localize the sought blind targets. The halokinetic-type potash-bearing salts in the prospect zone have flowed upward and intruded into the surrounding sedimentary sequences, dominated by marl, gypsum and alluvium terraces. Processed gravity and magnetic data delineated a main potash source with negative gravity and magnetic amplitude responses. To better visualize these evaporite deposits, 3D models of density contrast and magnetic susceptibility were constructed through constrained inversion of the potential field data. A mixed-norm regularization technique was employed to generate sharp and compact geophysical models. Since tectonic pressure causes vertical movement of the potash in the studied region, a simple vertical cylinder is an appropriate geometry to simulate these geological targets. Therefore, the structural index (i.e. the decay rate of the potential field amplitude with distance) of such an assumed source was embedded in the inversion program as a geometrical constraint to image these geologically plausible sources. In addition, the top depths of the main and adjacent sources were estimated at 39 m and 22 m, respectively, via the combination of the analytic signal and Euler deconvolution techniques. Drilling results also indicated that the main potash source starts at a depth of 38 m. The 3D models of density contrast and magnetic susceptibility (assuming a superficial sedimentary cover as a hard constraint in the inversion algorithm) demonstrated that the potash source extends to a depth of less than 150 m.
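The Euler deconvolution step can be sketched as a windowed least-squares solve of the homogeneity equation; the synthetic profile below uses a 2-D line source with structural index 1, not the survey data, and the authors' combined analytic-signal workflow is not reproduced.

```python
import numpy as np

def euler_depth(x, T, tx, tz, si):
    # Homogeneity at z = 0: (x - x0) tx - z0 tz = si (B - T), which is
    # linear in the unknowns x0, z0, B.
    A = np.column_stack([tx, tz, si * np.ones_like(x)])
    b = x * tx + si * T
    x0, z0, B = np.linalg.lstsq(A, b, rcond=None)[0]
    return x0, z0            # horizontal position and depth of the source

# Synthetic check: a 2-D pole-like source buried at z = 38 m (si = 1).
xs = np.linspace(-200, 200, 401)
z_src, x_src = 38.0, 0.0
r2 = (xs - x_src) ** 2 + z_src**2
T = z_src / r2                                  # profile anomaly
tx = np.gradient(T, xs)                         # horizontal derivative
tz = (z_src**2 - (xs - x_src) ** 2) / r2**2     # analytic vertical derivative
print(euler_depth(xs, T, tx, tz, si=1.0))       # recovers (0, ~38)
```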
Reid, Allecia E.; Taber, Jennifer M.; Ferrer, Rebecca A.; Biesecker, Barbara B.; Lewis, Katie L.; Biesecker, Leslie G.; Klein, William M. P.
2018-01-01
Objective Genomic sequencing is becoming increasingly accessible, highlighting the need to understand the social and psychological factors that drive interest in receiving testing results. These decisions may depend on perceived descriptive norms (how most others behave) and injunctive norms (what is approved of by others). We predicted that descriptive norms would be directly associated with intentions to learn genomic sequencing results, whereas injunctive norms would be associated indirectly, via attitudes. These differential associations with intentions versus attitudes were hypothesized to be strongest when individuals held ambivalent attitudes toward obtaining results. Methods Participants enrolled in a genomic sequencing trial (n=372) reported intentions to learn medically actionable, non-medically actionable, and carrier sequencing results. Descriptive norms items referenced other study participants. Injunctive norms were analyzed separately for close friends and family members. Attitudes, attitudinal ambivalence, and sociodemographic covariates were also assessed. Results In structural equation models, both descriptive norms and friend injunctive norms were associated with intentions to receive all sequencing results (ps<.004). Attitudes consistently mediated all friend injunctive norms-intentions associations, but not the descriptive norms-intentions associations. Attitudinal ambivalence moderated the association between friend injunctive norms (p≤.001), but not descriptive norms (p=.16), and attitudes. Injunctive norms were significantly associated with attitudes when ambivalence was high, but were unrelated when ambivalence was low. Results replicated for family injunctive norms. Conclusions Descriptive and injunctive norms play roles in genomic sequencing decisions. Considering mediators and moderators of these processes enhances ability to optimize use of normative information to support informed decision making. PMID:29745680
Injunctive Norms and Alcohol Consumption: A Revised Conceptualization
Krieger, Heather; Neighbors, Clayton; Lewis, Melissa A.; LaBrie, Joseph W.; Foster, Dawn W.; Larimer, Mary E.
2016-01-01
Background Injunctive norms have been found to be important predictors of behaviors in many disciplines with the exception of alcohol research. This exception is likely due to a misconceptualization of injunctive norms for alcohol consumption. To address this, we outline and test a new conceptualization of injunctive norms and personal approval for alcohol consumption. Traditionally, injunctive norms have been assessed using Likert scale ratings of approval perceptions, whereas descriptive norms and individual behaviors are typically measured with behavioral estimates (i.e., number of drinks consumed per week, frequency of drinking, etc.). This makes comparisons between these constructs difficult because they are not similar conceptualizations of drinking behaviors. The present research evaluated a new representation of injunctive norms with anchors comparable to descriptive norms measures. Methods A study and a replication were conducted including 2,559 and 1,189 undergraduate students from three different universities. Participants reported on their alcohol-related consumption behaviors, personal approval of drinking, and descriptive and injunctive norms. Personal approval and injunctive norms were measured using both traditional measures and a new drink-based measure. Results Results from both studies indicated that drink-based injunctive norms were uniquely and positively associated with drinking whereas traditionally assessed injunctive norms were negatively associated with drinking. Analyses also revealed significant unique associations between drink-based injunctive norms and personal approval when controlling for descriptive norms. Conclusions These findings provide support for a modified conceptualization of personal approval and injunctive norms related to alcohol consumption and, importantly, offers an explanation and practical solution for the small and inconsistent findings related to injunctive norms and drinking in past studies. PMID:27030295
NASA Astrophysics Data System (ADS)
Huhn, Stefan; Peeling, Derek; Burkart, Maximilian
2017-10-01
With the availability of die face design tools and incremental solver technologies that provide detailed forming feasibility results in a timely fashion, the use of inverse solver technologies, and the resulting process improvements during the product development of stamped parts, is often underestimated. This paper presents some applications of inverse technologies currently used in the automotive industry to streamline the product development process and greatly increase the quality of the developed process and the resulting product. The first focus is on the so-called target strain technology. Application examples show how inverse forming analysis can be applied to support the process engineer during the development of a die face geometry for Class 'A' panels. The drawing process is greatly affected by the die face design, and the process designer has to ensure that the resulting drawn panel will meet specific requirements regarding surface quality and a minimum strain distribution to ensure dent resistance. The target strain technology provides almost immediate feedback to the process engineer during the die face design process on whether a specific change of the die face design will help to achieve these requirements or be counterproductive. The paper further shows how an optimization of the material flow can be achieved through a newly developed technology called Sculptured Die Face (SDF). The die face generation in SDF is better suited for use in optimization loops than any other conventional die face design technology based on cross-section design. A second focus of this paper is on the use of inverse solver technologies for secondary forming operations. The paper shows how inverse technology can be used to accurately and quickly develop trim lines on simple as well as complex support geometries.
Yang, Bo
2018-06-01
Based on the theory of normative social behavior (Rimal & Real, 2005), this study examined the effects of descriptive norms, close versus distal peer injunctive norms, and interdependent self-construal on college students' intentions to consume alcohol. A cross-sectional study conducted among U.S. college students (N = 581) found that descriptive norms and close and distal peer injunctive norms had independent effects on college students' intentions to consume alcohol. Furthermore, close peer injunctive norms moderated the effects of descriptive norms on intentions to consume alcohol, and the interaction showed different patterns among students with strong and weak interdependent self-construals. High levels of close peer injunctive norms weakened the relationship between descriptive norms and intentions to consume alcohol among students with a strong interdependent self-construal, but strengthened it among students with a weak interdependent self-construal. Implications of the findings for norms-based research and college drinking interventions are discussed.
Reconstructing the duty of water: a study of emergent norms in socio-hydrology
NASA Astrophysics Data System (ADS)
Wescoat, J. L., Jr.
2013-12-01
This paper assesses the changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, or more precisely the history of socio-hydrologic ideas, a line of research that is useful for interpreting and anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late 18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. The duty of water concept in this context articulated a form of political power, as did related irrigation engineering concepts such as "command" and "regime". Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for private water rights appropriation (e.g., only 40 to 80 acres per cfs). While both concepts of duty addressed socio-economic values associated with irrigation, the western US linked duty with justifications for, and limits of, water ownership. The final sections show that while the duty of water concept has been eclipsed in practice by other measures, standards, and values of water use efficiency, it has continuing relevance for examining ethical duties and for anticipating, if not predicting, emerging social values with respect to water.
Foster, Dawn W.; Neighbors, Clayton; Krieger, Heather
2015-01-01
Objectives This study assessed descriptive and injunctive norms, evaluations of alcohol consequences, and acceptability of drinking. Methods Participants were 248 heavy-drinking undergraduates (81.05% female; Mage = 23.45). Results Stronger perceptions of descriptive and injunctive norms for drinking and more positive evaluations of alcohol consequences were positively associated with drinking and the number of drinks considered acceptable. Descriptive and injunctive norms interacted, indicating that injunctive norms were linked with number of acceptable drinks among those with higher descriptive norms. Descriptive norms and evaluations of consequences interacted, indicating that descriptive norms were positively linked with number of acceptable drinks among those with negative evaluations of consequences; however, among those with positive evaluations of consequences, descriptive norms were negatively associated with number of acceptable drinks. Injunctive norms and evaluations of consequences interacted, indicating that injunctive norms were positively associated with number of acceptable drinks, particularly among those with positive evaluations of consequences. A three-way interaction emerged between injunctive and descriptive norms and evaluations of consequences, suggesting that injunctive norms and the number of acceptable drinks were positively associated more strongly among those with negative versus positive evaluations of consequences. Those with higher acceptable drinks also had positive evaluations of consequences and were high in injunctive norms. Conclusions Findings supported hypotheses that norms and evaluations of alcohol consequences would interact with respect to drinking and acceptance of drinking. These examinations have practical utility and may inform development and implementation of interventions and programs targeting alcohol misuse among heavy drinking undergraduates. PMID:25437265
Extending the Mertonian Norms: Scientists' Subscription to Norms of Research
ERIC Educational Resources Information Center
Anderson, Melissa S.; Ronning, Emily A.; De Vries, Raymond; Martinson, Brian C.
2010-01-01
This analysis, based on focus groups and a national survey, assesses scientists' subscription to the Mertonian norms of science and associated counternorms. It also supports extension of these norms to governance (as opposed to administration), as a norm of decision-making, and quality (as opposed to quantity), as an evaluative norm.
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
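The core step of the pipeline above (a lagged stimulus matrix and a kernel estimate per source element) is linear and can be sketched compactly. Below is a minimal numpy illustration using ridge regression as a stand-in for the boosting-with-cross-validation estimator the authors use; the envelope predictor, lag count, and penalty are illustrative assumptions.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, lam=1.0):
    """Estimate a temporal response function (kernel) mapping a continuous
    stimulus to a source time course via regularized least squares.
    stimulus, response: 1-D arrays of equal length (samples).
    n_lags: kernel length in samples.
    lam: ridge penalty (stand-in for the paper's boosting regularization)."""
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Ridge solution: (X'X + lam*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

# Toy usage: recover a known 50-sample kernel from noisy convolved data.
rng = np.random.default_rng(0)
true_kernel = np.hanning(50)
stim = rng.standard_normal(5000)
resp = np.convolve(stim, true_kernel)[:5000] + 0.1 * rng.standard_normal(5000)
est = estimate_trf(stim, resp, n_lags=50, lam=10.0)
```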
Verbeke, Peter; Vermeulen, Gert; Meysman, Michaël; Vander Beken, Tom
2015-01-01
Using the new legal basis provided by the Lisbon Treaty, the Council of the European Union has endorsed the 2009 Procedural Roadmap for strengthening the procedural rights of suspected or accused persons in criminal proceedings. This Roadmap has so far resulted in six measures from which specific procedural minimum standards have been and will be adopted or negotiated. So far, only Measure E directly touches on the specific issue of vulnerable persons. This Measure has recently produced a tentative result through a Commission Recommendation on procedural safeguards for vulnerable persons in criminal proceedings. This contribution aims to discuss the need for the introduction of binding minimum standards throughout Europe to provide additional protection for mentally disordered defendants. The paper will examine whether or not the member states adhere to existing fundamental norms and standards in this context, and whether the application of these norms and standards should be made more uniform. For this purpose, the procedural situation of mentally disordered defendants in Belgium and England and Wales will be thoroughly explored. The research establishes that Belgian law is unsatisfactory in the light of the Strasbourg case law, and that the situation in practice in England and Wales indicates not only that there is justifiable doubt about whether fundamental principles are always adhered to, but also that these principles should become more anchored in everyday practice. It will therefore be argued that there is a need for putting Measure E into practice. The Commission Recommendation, though only suggestive, may serve as a necessary and inspirational vehicle to improve the procedural rights of mentally disordered defendants and to ensure that member states are able to cooperate within the mutual recognition framework without being challenged on the grounds that they are collaborating with peers who do not respect defendants' fundamental fair trial rights. Throughout this contribution the term 'defendant' will be used, and no difference will be made in terminology between suspected and accused persons. This contribution only covers the situation of mentally disordered adult defendants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Children are sensitive to norms of giving.
McAuliffe, Katherine; Raihani, Nichola J; Dunham, Yarrow
2017-10-01
People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were told how much other children give (descriptive norm) or what they should give according to an adult (injunctive norm). Results showed that children generally gave more when they were exposed to a generous norm. However, patterns of compliance varied with age. Younger children were more likely to comply with the selfish norm, suggesting a licensing effect. By contrast, older children were more influenced by the generous norm, yet capped their donations at 50%, perhaps adhering to a pre-existing norm of equality. Children were not differentially influenced by descriptive or injunctive norms, suggesting a primacy of norm content over norm format. Together, our findings indicate that while generosity is malleable in children, normative information does not completely override pre-existing biases. Copyright © 2017 Elsevier B.V. All rights reserved.
The application of vector concepts on two skew lines
NASA Astrophysics Data System (ADS)
Alghadari, F.; Turmudi; Herman, T.
2018-01-01
The purpose of this study is to show how vector concepts can be applied to two skew lines in a three-dimensional (3D) coordinate system, and how that application can be used in teaching. Several mathematical concepts serve related functions, but the relationship between vectors and 3D geometry has not been exploited in classroom learning. Studies show that female students have more difficulty learning 3D geometry than male students, owing to differences in personal spatial intelligence. Relating vector concepts to 3D geometry can help balance the learning achievement and mathematical ability of male and female students. Distances on a cube, cuboid, or pyramid can be expressed through the rectangular coordinates of points in space, and two points on a line define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, first represent each line by a direction vector using the position-vector concept; next obtain a normal vector of the two direction vectors by the cross product; then form a vector joining a pair of points, one on each line. The shortest distance is the scalar (orthogonal) projection of that joining vector onto the normal vector. The angle is obtained from the dot product of the two direction vectors followed by the inverse cosine. Applications include mathematics learning and the orthographic projection method.
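The computation just described is a few lines of vector algebra. A minimal numpy sketch (the function name and example points are our own):

```python
import numpy as np

def skew_line_distance_and_angle(p1, u, p2, v):
    """Shortest distance and angle between two skew lines, each given as a
    point and a direction vector (p1 + t*u and p2 + s*v)."""
    n = np.cross(u, v)                    # normal vector via cross product
    w = np.asarray(p2) - np.asarray(p1)   # vector joining a point on each line
    dist = abs(np.dot(w, n)) / np.linalg.norm(n)  # scalar projection onto n
    cos_t = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))
    return dist, angle

# Example: a face diagonal and an opposite edge of a unit cube
# give distance 1.0 and angle 45 degrees.
d, a = skew_line_distance_and_angle([0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0])
```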
An advanced algorithm for deformation estimation in non-urban areas
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-09-01
This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused for instance by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped and this can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates an object adaptive spatial phase filtering and residual topography removal for an accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein, the phase inversion is performed using an L1 norm minimization which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.
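The L1-norm inversion step can be illustrated generically with iteratively reweighted least squares (IRLS). This is a sketch of the idea, not the authors' implementation, and the design matrix below is an arbitrary stand-in for the matrix linking interferometric phases to the unknowns.

```python
import numpy as np

def l1_inversion(A, b, n_iter=30, eps=1e-6):
    """Minimize ||A x - b||_1 by iteratively reweighted least squares.
    More robust to outliers (e.g. phase-unwrapping errors) than the
    L2/SVD inversion used in standard SBAS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # L2 starting point
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)    # down-weight large residuals
        Aw = A * w[:, None]                     # row-scaled design matrix
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

# Toy usage: a few gross errors barely perturb the L1 solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 5))
x_true = np.arange(5.0)
b = A @ x_true
b[::10] += 20.0            # simulate unwrapping blunders
x_l1 = l1_inversion(A, b)  # close to x_true, unlike plain lstsq
```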
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can introduce errors into the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM that includes both magnitude- and susceptibility-based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both structure edges and tiny features to be recovered while noise and artifacts are reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map of the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed across multiple head positions, with the supine position closest to the gold-standard COSMOS result. PMID:27019480
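The mixed constraint in each SFCR step (an l2 fidelity term plus a weighted l1 structural term) can be sketched with a generic proximal-gradient (ISTA) loop. The operator A, weights, and initial map below are generic stand-ins, not the QSM dipole model or the paper's feature masks.

```python
import numpy as np

def soft(z, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sfcr_like(A, b, x0, m, lam1=0.1, lam2=1.0, n_iter=200):
    """Minimize 0.5||Ax - b||^2 + 0.5*lam2*||m*(x - x0)||^2 + lam1*||x||_1
    by ISTA; m is a per-voxel fidelity weight, x0 an initial map."""
    # Step size 1/L, with L an upper bound on the gradient's Lipschitz constant.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam2 * m.max() ** 2)
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam2 * m**2 * (x - x0)
        x = soft(x - step * grad, step * lam1)
    return x
```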
The advantages of logarithmically scaled data for electromagnetic inversion
NASA Astrophysics Data System (ADS)
Wheelock, Brent; Constable, Steven; Key, Kerry
2015-06-01
Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
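A sketch of the recommended transform, with first-order error propagation under the same circular Gaussian error assumption the paper applies to real and imaginary components; note that both propagated errors scale as sd/|d|, which motivates withholding data whose relative error exceeds roughly 10 per cent. Function names are our own.

```python
import numpy as np

def to_log_amplitude_phase(d, sd):
    """Convert complex data d with standard errors sd (on the complex
    components) to log-amplitude and phase with propagated errors.
    To first order, both transformed errors equal sd/|d|."""
    amp = np.abs(d)
    log_amp = np.log(amp)
    phase = np.angle(d)                  # radians
    err = sd / amp                       # error of log-amplitude and of phase
    return log_amp, phase, err
```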
ERIC Educational Resources Information Center
Lee, Hyegyu; Paek, Hye-Jin
2013-01-01
Objective: To examine how norm appeals and guilt influence smokers' behavioural intention. Design: Quasi-experimental design. Setting: South Korea. Method: Two hundred and fifty-five male smokers were randomly assigned to descriptive, injunctive, or subjective anti-smoking norm messages. After they viewed the norm messages, their norm perceptions,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirro, G.A.
1997-02-01
This paper presents an overview of issues related to handling NORM materials, and provides a description of a facility designed for the processing of NORM contaminated equipment. With regard to handling NORM materials the author discusses sources of NORM, problems, regulations and disposal options, potential hazards, safety equipment, and issues related to personnel protection. For the facility, the author discusses: description of the permanent facility; the operations of the facility; the license it has for handling specific radioactive material; operating and safety procedures; decontamination facilities on site; NORM waste processing capabilities; and offsite NORM services which are available.
A case study on the formation and sharing process of science classroom norms
NASA Astrophysics Data System (ADS)
Chang, Jina; Song, Jinwoong
2016-03-01
The teaching and learning of science in school are influenced by various factors, including both individual factors, such as member beliefs, and social factors, such as the power structure of the class. To understand this complex context affected by various factors in schools, we investigated the formation and sharing process of science classroom norms in connection with these factors. By examining the developmental process of science classroom norms, we identified how the norms were realized, shared, and internalized among the members. We collected data through classroom observations and interviews focusing on two elementary science classrooms in Korea. From these data, factors influencing norm formation were extracted and developed as stories about norm establishment. The results indicate that every science classroom norm was established, shared, and internalized differently according to the values ingrained in the norms, the agent of norm formation, and the members' understanding about the norm itself. The desirable norms originating from values in science education, such as having an inquiring mind, were not established spontaneously by students, but were instead established through well-organized norm networks to encourage concrete practice. Educational implications were discussed in terms of the practice of school science inquiry, cultural studies, and value-oriented education.
Jacobson, Ryan P; Mortensen, Chad R; Cialdini, Robert B
2011-03-01
The authors suggest that injunctive and descriptive social norms engage different psychological response tendencies when made selectively salient. On the basis of suggestions derived from the focus theory of normative conduct and from consideration of the norms' functions in social life, the authors hypothesized that the 2 norms would be cognitively associated with different goals, would lead individuals to focus on different aspects of self, and would stimulate different levels of conflict over conformity decisions. Additionally, a unique role for effortful self-regulation was hypothesized for each type of norm-used as a means to resist conformity to descriptive norms but as a means to facilitate conformity for injunctive norms. Four experiments supported these hypotheses. Experiment 1 demonstrated differences in the norms' associations to the goals of making accurate/efficient decisions and gaining/maintaining social approval. Experiment 2 provided evidence that injunctive norms lead to a more interpersonally oriented form of self-awareness and to a greater feeling of conflict about conformity decisions than descriptive norms. In the final 2 experiments, conducted in the lab (Experiment 3) and in a naturalistic environment (Experiment 4), self-regulatory depletion decreased conformity to an injunctive norm (Experiments 3 and 4) and increased conformity to a descriptive norm (Experiment 4)-even though the norms advocated identical behaviors. By illustrating differentiated response tendencies for each type of social norm, this research provides new and converging support for the focus theory of normative conduct. (c) 2011 APA, all rights reserved
ERIC Educational Resources Information Center
Gorgorio, Nuria; Planas, Nuria
2005-01-01
Starting from the constructs "cultural scripts" and "social representations", and on the basis of the empirical research we have been developing until now, we revisit the construct norms from a sociocultural perspective. Norms, both sociomathematical norms and norms of the mathematical practice, as cultural scripts influenced…
ERIC Educational Resources Information Center
McGuire, Luke; Rutland, Adam; Nesdale, Drew
2015-01-01
The present study examined the interactive effects of school norms, peer norms, and accountability on children's intergroup attitudes. Participants (n = 229) aged 5-11 years, in a between-subjects design, were randomly assigned to a peer group with an inclusion or exclusion norm, learned their school either had an inclusion norm or not, and were…
Rotation and rotation-vibration spectroscopy of the 0⁺-0⁻ inversion doublet in deuterated cyanamide.
Kisiel, Zbigniew; Kraśnicki, Adam; Jabs, Wolfgang; Herbst, Eric; Winnewisser, Brenda P; Winnewisser, Manfred
2013-10-03
The pure rotation spectrum of deuterated cyanamide was recorded at frequencies from 118 to 649 GHz, complemented by measurement of its high-resolution rotation-vibration spectrum at 8-350 cm⁻¹. For D2NCN the analysis revealed considerable perturbations between the lowest Ka rotational energy levels in the 0⁺ and 0⁻ substates of the lowest inversion doublet. The final data set for D2NCN exceeded 3000 measured transitions and was successfully fitted with a Hamiltonian accounting for the 0⁺ ↔ 0⁻ coupling. A smaller data set, consisting only of pure rotation and rotation-vibration lines observed with microwave techniques, was obtained for HDNCN, and additional transitions of this type were also measured for H2NCN. The spectroscopic data for all three isotopic species were fitted with a unified, robust Hamiltonian allowing confident prediction of spectra well into the terahertz frequency region, which is of interest to contemporary radioastronomy. The isotopic dependence of the determined inversion splitting, ΔE = 16.4964789(8), 32.089173(3), and 49.567770(6) cm⁻¹ for D2NCN, HDNCN, and H2NCN, respectively, is in good agreement with estimates from a simple reduced quartic-quadratic double-minimum potential.
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation of the visible extinction spectrum with particle size and refractive index was analyzed. Wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths, together with the minimum and maximum wavelengths in the visible region. The genetic algorithm was used as the inversion method under the dependent model. Computer simulations and experiments show that this analysis of the extinction spectrum, and the associated selection of optimal wavelengths, is feasible for total light scattering particle sizing. Because the rough contour of the particle size distribution can be determined from the visible extinction spectrum, the search range of the particle size parameter in the optimization algorithm is reduced, and a more accurate inversion result can then be obtained with this selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurements.
Current Trends in the study of Gender Norms and Health Behaviors
Fleming, Paul J.; Agnew-Brune, Christine
2015-01-01
Gender norms are recognized as one of the major social determinants of health and gender norms can have implications for an individual’s health behaviors. This paper reviews the recent advances in research on the role of gender norms on health behaviors most associated with morbidity and mortality. We find that (1) the study of gender norms and health behaviors is varied across different types of health behaviors, (2) research on masculinity and masculine norms appears to have taken on an increasing proportion of studies on the relationship between gender norms and health, and (3) we are seeing new and varied populations integrated into the study of gender norms and health behaviors. PMID:26075291
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness: 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND: an exact support vector (ESV) bound and a kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
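To make the linear-programming formulation concrete, here is a heavily hedged linear-kernel sketch of a 1-norm regularized novelty detector in the spirit of the paper; the paper's exact kernelized formulation may differ. The split w = w⁺ − w⁻ linearizes the 1-norm, and C is set from a ν-style parameter (our choice) so the program stays bounded.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_novelty(X, nu=0.2):
    """Fit f(x) = w.x - rho by minimizing sum|w_j| + C*sum(xi_i) - rho
    subject to w.x_i >= rho - xi_i and xi >= 0; novelties have f(x) < 0.
    Linear-kernel sketch; C = 1/(nu*n) keeps the LP bounded (C*n > 1)."""
    n, d = X.shape
    C = 1.0 / (nu * n)
    # Variables z = [w_plus (d), w_minus (d), xi (n), rho (1)].
    c = np.concatenate([np.ones(2 * d), C * np.ones(n), [-1.0]])
    # Constraint rows: rho - xi_i - x_i.(w+ - w-) <= 0.
    A_ub = np.hstack([-X, X, -np.eye(n), np.ones((n, 1))])
    b_ub = np.zeros(n)
    bounds = [(0, None)] * (2 * d + n) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w = z[:d] - z[d:2 * d]     # many entries end up exactly zero (sparseness)
    rho = z[-1]
    return w, rho
```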
High resolution beamforming on large aperture vertical line arrays: Processing synthetic data
NASA Astrophysics Data System (ADS)
Tran, Jean-Marie Q.; Hodgkiss, William S.
1990-09-01
This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column, concentrating on high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which in theory offers the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm, and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data constituting a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
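For reference, the four estimators named above can be computed from a sensor covariance matrix in a few lines. This plane-wavefront, uniform-line-array sketch omits the curved-wavefront replicas and the subaperture recombination studied in the memorandum; names and defaults are our own.

```python
import numpy as np

def spatial_spectra(R, n_sources, spacing=0.5,
                    angles=np.linspace(-90, 90, 361)):
    """Bartlett, MVDR, MUSIC and minimum-norm spectra from an M x M sensor
    covariance R, for a uniform line array with spacing in wavelengths."""
    M = R.shape[0]
    k = np.arange(M)
    evals, evecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = evecs[:, :M - n_sources]             # noise subspace
    P = En @ En.conj().T                      # noise-subspace projector
    w_mn = P[:, 0] / P[0, 0]                  # min-norm weight, first element 1
    Rinv = np.linalg.inv(R)
    out = {"bartlett": [], "mvdr": [], "music": [], "min_norm": []}
    for th in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * spacing * k * np.sin(th))   # steering vector
        out["bartlett"].append(np.real(a.conj() @ R @ a))
        out["mvdr"].append(1.0 / np.real(a.conj() @ Rinv @ a))
        out["music"].append(1.0 / np.real(a.conj() @ P @ a))
        out["min_norm"].append(1.0 / np.abs(a.conj() @ w_mn) ** 2)
    return angles, {key: np.asarray(v) for key, v in out.items()}
```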
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increased geometrically with depth, which biased the mean resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method, which computes a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model and thereby accounts for homogeneous confining units. The minimum-adjusted method is also developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004 and were reevaluated using the mean, minimum-unadjusted, and minimum-adjusted methods to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, whose effect the minimum-unadjusted and minimum-adjusted methods accounted for. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than for the mean method. For most sites, the local heterogeneity adjustment of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
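The contrast between the methods reduces to which statistic of a vertical resistivity column drives the estimate. A schematic sketch (the depth weighting and the heterogeneity adjustment of the report are omitted; the function and scaling are our own):

```python
import numpy as np

def relative_leakage(resistivity_columns):
    """resistivity_columns: 2-D array (n_columns, n_depth_levels) of
    inverse-model resistivities beneath the canal bed.
    Mean method: vertical mean, so thin confining units are averaged away.
    Minimum method: the least-resistive (least-permeable) layer controls."""
    mean_est = resistivity_columns.mean(axis=1)
    min_est = resistivity_columns.min(axis=1)
    # Normalize each estimate to a relative 0-1 leakage-potential scale.
    scale = lambda v: (v - v.min()) / (v.max() - v.min())
    return scale(mean_est), scale(min_est)
```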
Rheodynamic model of cardiac pressure pulsations.
Petrov, V G; Nikolov, S G
1999-03-15
To analyse parametrically (in terms of the qualitative theory of dynamical systems) the mechanical influence of inertia, resistance (positive and negative), elasticity and other global properties of the heart muscle on the left ventricular pressure, an active rheodynamic model based on Newton's principles is proposed. The equation of motion of the heart mass centre is derived from an energy conservation law balancing the rate of mechanical (kinetic and potential) energy variation against the power of chemical energy influx and dissipative energy outflux. A corresponding dynamical system of two ordinary differential equations is obtained and parametrically analysed under physiological conditions. The main conclusion is that, in the physiological norm, the heart's electrical activity renders its equilibrium state unstable, and mechanical self-oscillations emerge around it. If the electrical activity ceases, an inverse phase reconstruction occurs during which the unstable equilibrium state of the system becomes stable and the self-oscillations disappear.
Wartenburger, Isabell; Mériau, Katja; Scheibe, Christina; Goodenough, Oliver R.; Villringer, Arno; van der Meer, Elke; Heekeren, Hauke R.
2008-01-01
To investigate how individual differences in moral judgment competence are reflected in the human brain, we used event-related functional magnetic resonance imaging, while 23 participants made either socio-normative or grammatical judgments. Participants with lower moral judgment competence recruited the left ventromedial prefrontal cortex and the left posterior superior temporal sulcus more than participants with greater competence in this domain when identifying social norm violations. Moreover, moral judgment competence scores were inversely correlated with activity in the right dorsolateral prefrontal cortex (DLPFC) during socio-normative relative to grammatical judgments. Greater activity in right DLPFC in participants with lower moral judgment competence indicates increased recruitment of rule-based knowledge and its controlled application during socio-normative judgments. These data support current models of the neurocognition of morality according to which both emotional and cognitive components play an important role. PMID:19015093
Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-01-01
Advanced techniques such as the Small Baseline Subset Algorithm (SBAS) have been developed for terrain motion mapping in non-urban areas with a focus on extracting information from distributed scatterers (DSs). SBAS uses small baseline differential interferograms (to limit the effects of geometric decorrelation) and these are typically multilooked to reduce phase noise, resulting in loss of resolution. Various error sources e.g. phase unwrapping errors, topographic errors, temporal decorrelation and atmospheric effects also affect the interferometric phase. The aim of our work is an improved deformation monitoring in non-urban areas exploiting high resolution SAR data. The paper provides technical details and a processing example of a newly developed technique which incorporates an adaptive spatial phase filtering algorithm for an accurate high resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach where we perform the phase inversion using a more robust L1 norm minimization.
The Evolution and Discharge of Electric Fields within a Thunderstorm
NASA Astrophysics Data System (ADS)
Hager, William W.; Nisbet, John S.; Kasha, John R.
1989-05-01
A 3-dimensional electrical model for a thunderstorm is developed and finite difference approximations to the model are analyzed. If the spatial derivatives are approximated by a method akin to the box scheme and if the temporal derivative is approximated by either a backward difference or the Crank-Nicolson scheme, we show that the resulting discretization is unconditionally stable. The forward difference approximation to the time derivative is stable when the time step is sufficiently small relative to the ratio between the permittivity and the conductivity. Max-norm error estimates for the discrete approximations are established. To handle the propagation of lightning, special numerical techniques are devised based on the Inverse Matrix Modification Formula and Cholesky updates. Numerical comparisons between the model and theoretical results of Wilson and Holzer-Saxon are presented. We also apply our model to a storm observed at the Kennedy Space Center on July 11, 1978.
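The Inverse Matrix Modification Formula referenced here is the Sherman-Morrison(-Woodbury) identity, which lets a stored inverse be updated cheaply when a lightning channel changes a few entries of the operator. A rank-1 numpy sketch, with a random well-conditioned matrix standing in for the discretized operator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in
A_inv = np.linalg.inv(A)

# Rank-1 modification A + u v^T (e.g. a localized conductivity change).
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

# Sherman-Morrison: (A + u v^T)^-1 = A^-1 - (A^-1 u v^T A^-1)/(1 + v^T A^-1 u)
Au = A_inv @ u
vA = v.T @ A_inv
A_new_inv = A_inv - (Au @ vA) / (1.0 + (v.T @ Au).item())

assert np.allclose(A_new_inv, np.linalg.inv(A + u @ v.T))
```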
Approximate equiangular tight frames for compressed sensing and CDMA applications
NASA Astrophysics Data System (ADS)
Tsiligianni, Evaggelia; Kondi, Lisimachos P.; Katsaggelos, Aggelos K.
2017-12-01
Performance guarantees for recovery algorithms employed in sparse representations and compressed sensing highlight the importance of incoherence. Optimal bounds on incoherence are attained by equiangular unit norm tight frames (ETFs). Although ETFs are important in many applications, they do not exist for all dimensions, and their construction has proven extremely difficult. In this paper, we construct frames that are close to ETFs. According to results from frame and graph theory, the existence of an ETF depends on the existence of its signature matrix, that is, a symmetric matrix with a certain structure and a spectrum consisting of two distinct eigenvalues. We view the construction of a signature matrix as an inverse eigenvalue problem and propose a method that produces frames of any dimension that are close to ETFs. Due to the achieved equiangularity property, the frames so obtained can be employed as spreading sequences in synchronous code-division multiple access (s-CDMA) systems, besides compressed sensing.
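The optimal incoherence bound referred to here is the Welch bound, which ETFs attain with equality. A quick numpy check of how close a frame comes to that bound (a random frame is used purely for illustration):

```python
import numpy as np

def coherence(F):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = np.abs(F.conj().T @ F)           # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(M, N):
    """Lower bound on the coherence of N unit-norm vectors in C^M;
    ETFs meet it with equality."""
    return np.sqrt((N - M) / (M * (N - 1)))

rng = np.random.default_rng(3)
M, N = 6, 16
F = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
F /= np.linalg.norm(F, axis=0)           # normalize columns
print(coherence(F), ">=", welch_bound(M, N))
```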
Evolution of 3D electron density of the solar corona from the minimum to maximum of Solar Cycle 24
NASA Astrophysics Data System (ADS)
Wang, Tongjiang; Reginald, Nelson L.; Davila, Joseph M.; St Cyr, O. C.
2016-10-01
The variability of the solar white-light corona and its connection to solar activity has been studied for more than half a century. It is widely accepted that the temporal variation of the total radiance of the K-corona follows the solar cycle pattern (e.g., it is correlated with sunspot number). However, the origin of this variation and its relationships to coronal mass ejections and the solar wind are yet to be clearly understood. The COR1-A and -B instruments onboard the STEREO spacecraft have continued to perform high-cadence (5 min) polarized brightness (pB) measurements from two different vantage points from the solar minimum to the solar maximum of Solar Cycle 24. With these pB observations we have reconstructed the 3D coronal density between 1.5 and 4.0 solar radii for 100 Carrington rotations (CRs) from 2007 to 2014 using the spherically symmetric inversion (SSI) method. We validate these 3D density reconstructions by other means such as tomography, MHD modeling, and pB inversion of LASCO/C2 data. We analyze the solar cycle variations of total coronal mass (or average density) over the global Sun and in the two hemispheres, as well as the variations of the streamer area and mean density. Through wavelet analysis we find short-term oscillations with periods of 8-9 CRs during the ascending and maximum phases. We explore the origin of these oscillations based on the evolution of the photospheric magnetic flux and coronal structures.
Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study
NASA Astrophysics Data System (ADS)
Yang, Huachen; Zhang, Jianzhong
2018-06-01
In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, FWI of towed-streamer data easily converges to a local minimum due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets, and limited OBS data. Both the integrated towed-streamer data and the OBS data have low-frequency components. Therefore, at early iterations of the new FWI technique the OBS data, combined with the integrated towed-streamer data sets, reconstruct an appropriate background model, while the towed-streamer data play the major role in later iterations to improve the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new technique are superior to those inverted using conventional FWI.
An improved grey wolf optimizer algorithm for the inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui
2018-05-01
The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward-modeled and observed data show that the IGWO algorithm can find the global minimum and rarely sinks into local minima. For further study, inversion results using the IGWO are contrasted with the particle swarm optimization (PSO) and simulated annealing (SA) algorithms. The comparison reveals that, for a given number of iterations, the IGWO and PSO perform similarly well and balance exploration and exploitation better than the SA.
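A compact sketch of the GWO position update with a nonlinear convergence factor. The quadratic decay and the sphere test function are illustrative assumptions; the paper's exact convergence factor and adaptive location-updating strategy are not reproduced here.

```python
import numpy as np

def gwo(f, dim, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimize f over [bounds[0], bounds[1]]^dim with a basic grey wolf
    optimizer.  The nonlinear factor a(t) = 2*(1 - (t/T)^2) is an
    illustrative choice (standard GWO decays a linearly from 2 to 0)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fit)[:3]]           # alpha, beta, delta
        a = 2.0 * (1.0 - (t / n_iter) ** 2)        # nonlinear decay
        cand = np.zeros_like(X)
        for L in leaders:                          # encircle each leader
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2.0 * a * r1 - a
            C = 2.0 * r2
            cand += L - A * np.abs(C * L - X)
        X = np.clip(cand / 3.0, lo, hi)            # average of the three moves
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], fit.min()

# Toy usage: a sphere function standing in for a geophysical misfit.
best_x, best_f = gwo(lambda x: np.sum(x**2), dim=5, bounds=(-10, 10))
```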
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to minimize the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
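The minimum-entropy criterion scores candidate phase corrections by the Shannon entropy of the focused image (sharper images have lower entropy). A minimal sketch; the joint Bayesian estimation of the paper is not reproduced, and the names below are our own.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity; better-focused
    ISAR images have lower entropy."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def autofocus_1d(raw, phase_candidates):
    """Pick the per-pulse phase-error candidate whose correction gives the
    minimum-entropy image; raw is (pulses, range bins), imaging by FFT."""
    return min(phase_candidates,
               key=lambda ph: image_entropy(
                   np.fft.fft(raw * np.exp(-1j * ph)[:, None], axis=0)))
```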
The Social Norms of Suicidal and Self-Harming Behaviours in Scottish Adolescents.
Quigley, Jody; Rasmussen, Susan; McAlaney, John
2017-03-15
Although the suicidal and self-harming behaviour of individuals is often associated with similar behaviours in people they know, little is known about the impact of perceived social norms on those behaviours. In a range of other behavioural domains (e.g., alcohol consumption, smoking, eating behaviours) perceived social norms have been found to strongly predict individuals' engagement in those behaviours, although discrepancies often exist between perceived and reported norms. Interventions which align perceived norms more closely with reported norms have been effective in reducing damaging behaviours. The current study aimed to explore whether the Social Norms Approach is applicable to suicidal and self-harming behaviours in adolescents. Participants were 456 pupils from five Scottish high-schools (53% female, mean age = 14.98 years), who completed anonymous, cross-sectional surveys examining reported and perceived norms around suicidal and self-harming behaviour. Friedman's ANOVA with post-hoc Wilcoxen signed-ranks tests indicated that proximal groups were perceived as less likely to engage in or be permissive of suicidal and self-harming behaviours than participants' reported themselves, whilst distal groups tended towards being perceived as more likely to do so. Binary logistic regression analyses identified a number of perceived norms associated with reported norms, with close friends' norms positively associated with all outcome variables. The Social Norms Approach may be applicable to suicidal and self-harming behaviour, but associations between perceived and reported norms and predictors of reported norms differ to those found in other behavioural domains. Theoretical and practical implications of the findings are considered.
Performance of Dutch children on the Bayley III: a comparison study of US and Dutch norms.
Steenis, Leonie J P; Verhoeven, Marjolein; Hessen, Dave J; van Baar, Anneloes L
2015-01-01
The Bayley Scales of Infant and Toddler Development, third edition (Bayley-III) are frequently used to assess early child development worldwide. However, the original standardization included only US children, and it is still unclear whether these norms are adequate for use in other populations. Recently, norms for the Dutch version of the Bayley-III (the Bayley-III-NL) were developed. Scores based on Dutch and US norms were compared to study the need for population-specific norms. Scaled scores based on Dutch and US norms were compared for 1912 children between 14 days and 42 months 14 days of age. Next, the proportions of children scoring below -1 SD and below -2 SD under the two sets of norms were compared, to identify over- or under-referral for developmental delay resulting from non-population-based norms. Scaled scores based on Dutch norms fluctuated around values based on US norms on all subtests, with the extent of the deviations differing across ages and subtests. Differences in means were significant across all five subtests (p < .01), with small to large effect sizes (ηp² ranging from .03 to .26). Using the US instead of the Dutch norms resulted in over-referral for gross motor skills and under-referral for cognitive, receptive communication, expressive communication, and fine motor skills. The Dutch norms differ from the US norms for all subtests, and these differences are clinically relevant. Population-specific norms are needed to identify children with low scores for referral and intervention, and to facilitate international comparisons of population data.
Emergence and Evolution of Cooperation Under Resource Pressure
Pereda, María; Zurro, Débora; Santos, José I.; Briz i Godino, Ivan; Álvarez, Myrian; Caro, Jorge; Galán, José M.
2017-01-01
We study the influence that resource availability has on cooperation in the context of hunter-gatherer societies. This paper proposes a model based on archaeological and ethnographic research on resource stress episodes, which exposes three different cooperative regimes according to the relationship between resource availability in the environment and population size. The most interesting regime represents moderate survival stress in which individuals coordinate in an evolutionary way to increase the probabilities of survival and reduce the risk of failing to meet the minimum needs for survival. Populations self-organise in an indirect reciprocity system in which the norm that emerges is to share the part of the resource that is not strictly necessary for survival, thereby collectively lowering the chances of starving. Our findings shed further light on the emergence and evolution of cooperation in hunter-gatherer societies. PMID:28362000
Which patients do I treat? An experimental study with economists and physicians
2012-01-01
This experiment investigates decisions made by prospective economists and physicians in an allocation problem which can be framed either medically or neutrally. The potential recipients differ with respect to their minimum needs as well as to how much they benefit from a treatment. We classify the allocators as either 'selfish', 'Rawlsian', or 'maximizing the number of recipients'. Economists tend to maximize their own payoff, whereas the physicians' choices are more in line with maximizing the number of recipients and with Rawlsianism. Regarding the framing, we observe that professional norms surface more clearly in familiar settings. Finally, we scrutinize how the probability of being served and the allocated quantity depend on a recipient's characteristics as well as on the allocator type. JEL Classification: A13, I19, C91, C72 PMID:22827912
Computation of Optimal Actuator/Sensor Locations
2013-12-26
weighting matrices Q = I and R = 0.01, and a minimum variance LQ-cost (with V = I): the L2 norm of the control signal is plotted versus actuator location. [Figure: relative linear-quadratic cost versus actuator location for Q = I and R = 100, 1, 0.01, and 0.0001.]
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained positive matrix factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization scheme can improve convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
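Of the schemes compared, the longest-norm initialization is the simplest to state: seed the endmember matrix with the pixels of largest spectral norm. A hypothetical sketch (the function name and data layout are our own):

```python
import numpy as np

def longest_norm_init(Y, k):
    """Y: (n_bands, n_pixels) hyperspectral data matrix.  Return the k
    pixels with the largest L2 norm as initial endmembers for the
    cPMF/NMF iterations (one of the schemes compared in the paper)."""
    idx = np.argsort(np.linalg.norm(Y, axis=0))[-k:]
    return Y[:, idx].copy()
```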
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this, in this paper we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D, for natural images, and outperforms them significantly for piecewise smooth images.
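In its discrete form the regularizer yields a closed-form denoiser: minimizing ||x - y||² + λ xᵀLx gives x = (I + λL)⁻¹ y. A toy 1-D sketch with intensity-adaptive edge weights (a stand-in for the paper's patch graphs) that also exhibits the piecewise-smooth behavior noted above; parameters are illustrative.

```python
import numpy as np

def graph_laplacian_denoise(y, lam=5.0, sigma=0.1):
    """Denoise y by x* = argmin ||x - y||^2 + lam * x^T L x, with edge
    weights from a Gaussian kernel on neighboring intensities."""
    n = len(y)
    W = np.zeros((n, n))
    for i in range(n - 1):                        # chain graph on neighbors
        w = np.exp(-((y[i] - y[i + 1]) ** 2) / (2 * sigma ** 2))
        W[i, i + 1] = W[i + 1, i] = w             # weak edges across jumps
    L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian
    return np.linalg.solve(np.eye(n) + lam * L, y)

# Piecewise-constant signal: edges across the jump get small weights, so
# smoothing acts within segments and the discontinuity is preserved.
rng = np.random.default_rng(4)
clean = np.r_[np.zeros(50), np.ones(50)]
noisy = clean + 0.05 * rng.standard_normal(100)
denoised = graph_laplacian_denoise(noisy)
```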
Sparse Regression as a Sparse Eigenvalue Problem
NASA Technical Reports Server (NTRS)
Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai
2008-01-01
We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient, and by including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of the choice of regularization.
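The forward pass of GSLS amounts to ORMP-style greedy selection. A plain numpy sketch of that pass, without the partitioned-inverse updates that make GSLS fast (so each step re-solves a least-squares problem from scratch):

```python
import numpy as np

def forward_sls(X, y, k):
    """Greedy sparse least squares: at each step, add the feature whose
    inclusion minimizes the residual sum of squares (ORMP-equivalent
    forward pass; no backward elimination, no fast inverse updates)."""
    n, d = X.shape
    S = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(d):
            if j in S:
                continue
            cols = S + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.sum((y - X[:, cols] @ coef) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        S.append(best_j)
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    return S, coef
```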
Professional Norms Guiding School Principals' Pedagogical Leadership
ERIC Educational Resources Information Center
Leo, Ulf
2015-01-01
Purpose: The purpose of this paper is to identify and analyze the professional norms surrounding school development, with a special emphasis on school principals' pedagogical leadership. Design/methodology/approach: A norm perspective is used to identify possible links between legal norms, professional norms, and actions. The findings are based on…
The hitchhiker's guide to altruism: gene-culture coevolution, and the internalization of norms.
Gintis, Herbert
2003-02-21
An internal norm is a pattern of behavior enforced in part by internal sanctions, such as shame, guilt and loss of self-esteem, as opposed to purely external sanctions, such as material rewards and punishment. The ability to internalize norms is widespread among humans, although in some so-called "sociopaths", this capacity is diminished or lacking. Suppose there is one genetic locus that controls the capacity to internalize norms. This model shows that if an internal norm is fitness enhancing, then for plausible patterns of socialization, the allele for internalization of norms is evolutionarily stable. This framework can be used to model Herbert Simon's (1990) explanation of altruism, showing that altruistic norms can "hitchhike" on the general tendency of internal norms to be personally fitness-enhancing. A multi-level selection, gene-culture coevolution argument then explains why individually fitness-reducing internal norms are likely to be prosocial as opposed to socially harmful.
Jeffrey, Jennifer; Whelan, Jodie; Pirouz, Dante M; Snowdon, Anne W
2016-07-01
Campaigns advocating behavioural changes often employ social norms as a motivating technique, favouring injunctive norms (what is typically approved or disapproved) over descriptive norms (what is typically done). Here, we investigate an upside to including descriptive norms in health and safety appeals. Because descriptive norms are easy to process and understand, they should provide a heuristic to guide behaviour in those individuals who lack the interest or motivation to reflect on the advocated behaviour more deeply. When those descriptive norms are positive - suggesting that what is done is consistent with what ought to be done - including them in campaigns should be particularly beneficial at influencing this low-involvement segment. We test this proposition via research examining booster seat use amongst parents with children of booster seat age, and find that incorporating positive descriptive norms into a related campaign is particularly impactful for parents who report low involvement in the topic of booster seat safety. Descriptive norms are easy to state and easy to understand, and our research suggests that these norms resonate with low involvement individuals. As a result, we recommend incorporating descriptive norms when possible into health and safety campaigns. Copyright © 2016. Published by Elsevier Ltd.
Cislaghi, Beniamino; Shakya, Holly
2018-03-01
Donors, practitioners and scholars are increasingly interested in harnessing the potential of social norms theory to improve adolescents' sexual and reproductive health outcomes. However, social norms theory is multifaceted, and its application in field interventions is complex. An introduction to social norms is presented that will be beneficial for those who intend to integrate a social norms perspective into their work to improve adolescents' sexual health in Africa. First, three main schools of thought on social norms are discussed, looking at the theoretical standpoint of each. Next, the difference between two important types of social norms (descriptive and injunctive) is explained, and then the concept of a "reference group" is examined. The difference between social and gender norms is then considered, highlighting how this difference is motivated by existing yet contrasting approaches to norms (in social psychology and gender theory). In the last section, existing evidence on the role that social norms play in influencing adolescents' sexual and reproductive health is reviewed. Conclusions call for further research and action to understand how norms affecting adolescents' sexual and reproductive health and rights (SRHR) can be changed in sub-Saharan Africa.
Factors affecting minimum push and pull forces of manual carts.
Al-Eisawi, K W; Kerk, C J; Congleton, J J; Amendola, A A; Jenkins, O C; Gaines, W
1999-06-01
The minimum forces needed to manually push or pull a 4-wheel cart of differing weights with similar wheel sizes from a stationary state were measured on four floor materials under different conditions of wheel width, diameter, and orientation. Cart load was increased from 0 to 181.4 kg in increments of 36.3 kg. The floor materials were smooth concrete, tile, asphalt, and industrial carpet. Two wheel widths were tested: 25 and 38 mm. Wheel diameters were 51, 102, and 153 mm. Wheel orientation was tested at four levels: F0R0 (all four wheels aligned in the forward direction), F0R90 (the two front wheels, the wheels furthest from the cart handle, aligned in the forward direction and the two rear wheels, the wheels closest to the cart handle, aligned at 90 degrees to the forward direction), F90R0 (the two front wheels aligned at 90 degrees to the forward direction and the two rear wheels aligned in the forward direction), and F90R90 (all four wheels aligned at 90 degrees to the forward direction). Wheel width did not have a significant effect on the minimum push/pull forces. The minimum push/pull forces were linearly proportional to cart weight, and inversely proportional to wheel diameter. The coefficients of rolling friction were estimated as 2.2, 2.4, 3.3, and 4.5 mm for hard rubber wheels rolling on smooth concrete, tile, asphalt, and industrial carpet floors, respectively. The effect of wheel orientation was not consistent over the tested conditions, but, in general, the smallest minimum push/pull forces were measured with all four wheels aligned in the forward direction, whereas the largest minimum push/pull forces were measured when all four wheels were aligned at 90 degrees to the forward direction. There was no significant difference between the push and pull forces when all four wheels were aligned in the forward direction.
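The reported proportionalities (force linear in cart weight, inverse in wheel diameter) are consistent with the classical rolling-resistance model F = f·N/r, in which the coefficient of rolling friction f has units of length. A small worked example in Python under assumed values (the cart's own empty weight is ignored here, since it is not given in the abstract):

    g = 9.81                  # m/s^2
    load_mass = 181.4         # kg, heaviest load tested (cart mass ignored)
    f = 0.0022                # m, rolling friction, hard rubber on smooth concrete
    r = 0.153 / 2.0           # m, radius of the 153 mm diameter wheel
    N = load_mass * g         # normal load carried by the wheels, newtons
    F = f * N / r             # minimum push/pull force implied by the model
    print(round(F, 1))        # ~51.2 N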
Brain responses to social norms: Meta-analyses of fMRI studies.
Zinchenko, Oksana; Arsalidou, Marie
2018-02-01
Social norms have a critical role in everyday decision-making, as frequent interaction with others regulates our behavior. Neuroimaging studies show that social-based and fairness-related decision-making activates an inconsistent set of areas, which sometimes includes the anterior insula, the anterior cingulate cortex, and other lateral prefrontal cortices. Social-based decision-making is complex, and variability in findings may be driven by socio-cognitive activities related to social norms. To distinguish among social-cognitive activities related to social norms, we identified 36 eligible articles in the functional magnetic resonance imaging (fMRI) literature, which we separate into two categories: (a) social norm representation and (b) norm violations. The majority of original articles (>60%) used tasks associated with fairness norms and decision-making, such as the ultimatum game, dictator game, or prisoner's dilemma; the rest used tasks associated with violations of moral norms, such as ratings of scenarios and sentences for moral depravity. Using quantitative meta-analyses, we report common and distinct brain areas that show concordance as a function of category. Specifically, concordance in ventromedial prefrontal regions is distinct to social norm representation processing, whereas concordance in the right insula, dorsolateral prefrontal, and dorsal cingulate cortices is distinct to norm violation processing. We propose a neurocognitive model of social norms for healthy adults, which could help guide future research in social norm compliance and mechanisms of its enforcement. © 2017 Wiley Periodicals, Inc.
Collective action and the evolution of social norm internalization
Gavrilets, Sergey; Richerson, Peter J.
2017-01-01
Human behavior is strongly affected by culturally transmitted norms and values. Certain norms are internalized (i.e., acting according to a norm becomes an end in itself rather than merely a tool in achieving certain goals or avoiding social sanctions). Humans’ capacity to internalize norms likely evolved in our ancestors to simplify solving certain challenges—including social ones. Here we study theoretically the evolutionary origins of the capacity to internalize norms. In our models, individuals can choose to participate in collective actions as well as punish free riders. In making their decisions, individuals attempt to maximize a utility function in which normative values are initially irrelevant but play an increasingly important role if the ability to internalize norms emerges. Using agent-based simulations, we show that norm internalization evolves under a wide range of conditions so that cooperation becomes “instinctive.” Norm internalization evolves much more easily and has much larger effects on behavior if groups promote peer punishment of free riders. Promoting only participation in collective actions is not effective. Typically, intermediate levels of norm internalization are most frequent but there are also cases with relatively small frequencies of “oversocialized” individuals willing to make extreme sacrifices for their groups no matter material costs, as well as “undersocialized” individuals completely immune to social norms. Evolving the ability to internalize norms was likely a crucial step on the path to large-scale human cooperation. PMID:28533363
NASA Astrophysics Data System (ADS)
Eladj, Said; bansir, fateh; ouadfeul, sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population; a good chromosome therefore has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the combined HGA/SAA is the use of a global search approach on a large population of local maxima to improve the performance of the method significantly. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Auto Statics (HGA/SAA) job, we first evaluated, by testing the "steepest ascent" stage, the optimal parameters for the data used: (1) the number of hill-climbing iterations, equal to 40, which defines the contribution of the "SA" algorithm to this hybrid approach; and (2) the minimum eigenvalue for SA, equal to 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for HGA/SAA. Using the values of residual static corrections already calculated by the "SAA and CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimal number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons, which will allow a more accurate structural interpretation. Key words: hybrid genetic algorithm, number of generations, model space, local maxima, number of hill-climbing iterations, minimum eigenvalue, cross-correlation table
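To make the hybrid scheme concrete, here is a toy Python sketch of a genetic algorithm whose offspring are polished by a steepest-ascent (hill-climbing) stage, in the spirit of HGA/SAA; all names, operators, and parameter values are illustrative, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def hill_climb(x, fitness, n_iter=40, step=0.1):
        # Steepest-ascent stage: keep any random perturbation that improves fitness.
        for _ in range(n_iter):
            cand = x + rng.normal(0.0, step, size=x.shape)
            if fitness(cand) > fitness(x):
                x = cand
        return x

    def hybrid_ga(fitness, dim, pop_size=50, n_gen=20):
        # Toy hybrid GA: fitness-proportional selection, uniform crossover,
        # Gaussian mutation, then hill-climbing refinement of every child.
        pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
        for _ in range(n_gen):
            fit = np.array([fitness(x) for x in pop])
            probs = fit - fit.min() + 1e-9
            probs /= probs.sum()
            idx = rng.choice(pop_size, size=(pop_size, 2), p=probs)
            parents = pop[idx]                            # (pop_size, 2, dim)
            mask = rng.random((pop_size, dim)) < 0.5      # uniform crossover
            children = np.where(mask, parents[:, 0], parents[:, 1])
            children += rng.normal(0.0, 0.05, size=children.shape)
            pop = np.array([hill_climb(c, fitness) for c in children])
        return pop[np.argmax([fitness(x) for x in pop])]

    # Example: maximize a smooth surrogate objective
    best = hybrid_ga(lambda x: -np.sum(x ** 2), dim=5)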
NORM management in the oil and gas industry.
Cowie, M; Mously, K; Fageeha, O; Nassar, R
2012-01-01
It has been established that naturally occurring radioactive material (NORM) may accumulate at various locations along the oil and gas production process. Components such as wellheads, separation vessels, pumps, and other processing equipment can become contaminated with NORM, and NORM can accumulate in the form of sludge, scale, scrapings, and other waste media. This can create a potential radiation hazard to workers, the general public, and the environment if certain controls are not established. Saudi Aramco has developed NORM management guidelines, and is implementing a comprehensive strategy to address all aspects of NORM management that aim to enhance NORM monitoring; control of NORM-contaminated equipment; control of NORM waste handling and disposal; and protection, awareness, and training of workers. The benefits of shared knowledge, best practice, and experience across the oil and gas industry are seen as key to the establishment of common guidance. This paper outlines Saudi Aramco's experience in the development of a NORM management strategy, and its goals of establishing common guidance throughout the oil and gas industry. Copyright © 2012. Published by Elsevier Ltd.
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Application of Inverse Modeling to Estimate Groundwater Recharge under Future Climate Scenario
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Wang, T.; Bartelt-Hunt, S.; Li, Y.
2016-12-01
Climate variability and change will impose profound influences on groundwater systems. Accurate estimation of groundwater recharge is extremely important for predicting flow and contaminant transport in the subsurface, yet it remains one of the most challenging tasks in the field of hydrology. Using an inverse modeling technique and the HYDRUS 1D software, we predicted the spatial distribution of groundwater recharge across the Upper Platte basin in Nebraska, USA, based on 5-year projected future climate and soil moisture data (2057-2060). The climate data were obtained from the Weather Research and Forecasting (WRF) model under the RCP 8.5 scenario, downscaled from the global CCSM4 model to a resolution of 24 × 24 km². Precipitation, potential evapotranspiration, and soil moisture data were extracted from 76 grid cells located within the Upper Platte basin to perform the inverse modeling. The Hargreaves equation was used to calculate the potential evapotranspiration from latitude, maximum and minimum temperature, and leaf area index (LAI) data at each node. Van Genuchten parameters were optimized using the inverse algorithm to minimize the error between input and modeled soil moisture data. The groundwater recharge was calculated as the amount of water that passed the lower boundary of the best-fitted model. The year 2057 was used as a spin-up period to minimize the impact of initial conditions. The model was calibrated for the years 2058 to 2059 and validated for 2060. This work demonstrates an efficient approach to estimating groundwater recharge based on climate modeling results, which will aid groundwater resources management under future climate scenarios.
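For reference, the Hargreaves-Samani form presumably used here is sketched below in Python; Ra (extraterrestrial radiation) depends on latitude and day of year and is passed in precomputed, and the example values are purely illustrative.

    import math

    def hargreaves_pet(t_max, t_min, ra_mm_day):
        # Hargreaves-Samani reference evapotranspiration, mm/day.
        # ra_mm_day is extraterrestrial radiation in evaporation-equivalent
        # mm/day; it is a function of latitude and day of year.
        t_mean = (t_max + t_min) / 2.0
        return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0))

    # Illustrative values only: a warm day with Ra ~ 16.6 mm/day
    print(round(hargreaves_pet(32.0, 18.0, 16.6), 2))   # ~6.11 mm/day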
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research area, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
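A generic form of the regularized estimator behind such geodetic load inversions (a sketch in our notation; the paper's exact weighting and parameterization may differ):

\[
\hat{\mathbf{x}} \;=\; \left(\mathbf{A}^{\mathsf{T}}\mathbf{P}\mathbf{A} + \alpha\,\mathbf{I}\right)^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{P}\,\mathbf{y},
\]

where y collects the GPS vertical displacements, A maps gridded EWH loads to displacements via loading Green's functions, P is the observation weight matrix (here estimated by Helmert variance component estimation), and α is chosen under the minimum mean square error criterion.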
Seasonality of diel cycles of dissolved trace-metal concentrations in a Rocky Mountain stream
Nimick, D.A.; Cleasby, T.E.; McCleskey, R. Blaine
2005-01-01
Substantial diel (24-h) cycles in dissolved (0.1-µm filtration) metal concentrations were observed during summer low flow, winter low flow, and snowmelt runoff in Prickly Pear Creek, Montana. During seven diel sampling episodes lasting 34-61.5 h, dissolved Mn and Zn concentrations increased from afternoon minimum values to maximum values shortly after sunrise. Dissolved As concentrations exhibited the inverse timing. The magnitude of diel concentration increases varied in the range 17-152% for Mn and 70-500% for Zn. Diel increases of As concentrations (17-55%) were less variable. The timing of minimum and maximum values of diel streamflow cycles was inconsistent among sampling episodes and had little relation to the timing of metal concentration cycles, suggesting that geochemical rather than hydrological processes are the primary control of diel metal cycles. Diel cycles of dissolved metal concentrations should be assumed to occur at any time of year in any stream with dissolved metals and neutral to alkaline pH. © Springer-Verlag 2005.
Comparative mapping and rapid karyotypic evolution in the genus helianthus.
Burke, John M; Lai, Zhao; Salmaso, Marzia; Nakazato, Takuya; Tang, Shunxue; Heesacker, Adam; Knapp, Steven J; Rieseberg, Loren H
2004-01-01
Comparative genetic linkage maps provide a powerful tool for the study of karyotypic evolution. We constructed a joint SSR/RAPD genetic linkage map of the Helianthus petiolaris genome and used it, along with an integrated SSR genetic linkage map derived from four independent H. annuus mapping populations, to examine the evolution of genome structure between these two annual sunflower species. The results of this work indicate the presence of 27 colinear segments resulting from a minimum of eight translocations and three inversions. These 11 rearrangements are more than previously suspected on the basis of either cytological or genetic map-based analyses. Taken together, these rearrangements required a minimum of 20 chromosomal breakages/fusions. On the basis of estimates of the time since divergence of these two species (750,000-1,000,000 years), this translates into an estimated rate of 5.5-7.3 chromosomal rearrangements per million years of evolution, the highest rate reported for any taxonomic group to date. PMID:15166168
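For the record, the quoted rate appears to follow from counting the 11 rearrangements along both lineages since divergence; a reconstruction of the arithmetic:

\[
\text{rate} \;=\; \frac{11\ \text{rearrangements}}{2T}
\;\approx\; 5.5\text{--}7.3\ \text{Myr}^{-1},
\qquad T = 0.75\text{--}1.0\ \text{Myr}.
\]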
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...
2017-05-18
We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
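For orientation, a batch version of the MNF denoising step is sketched below in Python (numpy/scipy); the paper's contribution is to update the two covariance estimates incrementally so this can run line-by-line, which the sketch omits, and the shift-difference noise estimator is one common choice, not necessarily the authors'.

    import numpy as np
    from scipy.linalg import eigh

    def mnf_denoise(image, n_keep):
        # Batch minimum-noise-fraction denoising; image is (pixels x bands).
        mean = image.mean(axis=0)
        X = image - mean
        dX = np.diff(X, axis=0) / np.sqrt(2.0)   # shift-difference noise estimate
        cov = X.T @ X / (X.shape[0] - 1)         # image covariance
        cov_n = dX.T @ dX / (dX.shape[0] - 1)    # noise covariance
        _, V = eigh(cov_n, cov)                  # solves cov_n v = lam * cov * v
        Y = X @ V                                # forward transform, ascending noise
        Y[:, n_keep:] = 0.0                      # zero out the noisiest components
        return Y @ np.linalg.inv(V) + mean       # inverse transform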
A Relationship Between the Solar Rotation and Activity Analysed by Tracing Sunspot Groups
NASA Astrophysics Data System (ADS)
Ruždjak, Domagoj; Brajša, Roman; Sudar, Davor; Skokić, Ivica; Poljančić Beljan, Ivana
2017-12-01
The sunspot positions published in the databases of the Greenwich Photoheliographic Results (GPR), the US Air Force Solar Optical Observing Network and National Oceanic and Atmospheric Administration (USAF/NOAA), and the Debrecen Photoheliographic Data (DPD) for the period 1874 to 2016 were used to calculate yearly values of the solar differential-rotation parameters A and B. These differential-rotation parameters were compared with the solar-activity level. We found that the Sun rotates more differentially at the minimum than at the maximum of activity during the epoch 1977 - 2016. An inverse correlation between equatorial rotation and solar activity was found using the recently revised sunspot number. The secular decrease of the equatorial rotation rate that accompanies the increase in activity stopped in the last part of the twentieth century. It was noted that when a significant peak in equatorial rotation velocity is observed during an activity minimum, the next maximum is weaker than the previous one.
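The parameters A and B presumably enter through the standard differential-rotation law fitted to sunspot tracer motions:

\[
\omega(b) \;=\; A + B\,\sin^{2} b,
\]

where b is heliographic latitude; A is the equatorial rotation rate, and a more negative B corresponds to more strongly differential rotation.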
NASA Astrophysics Data System (ADS)
Kim, Jung Hoon; Hagiwara, Tomomichi
2017-11-01
This paper is concerned with linear time-invariant (LTI) sampled-data systems (by which we mean sampled-data systems with LTI generalised plants and LTI controllers) and studies their H2 norms from the viewpoint of impulse responses and generalised H2 norms from the viewpoint of the induced norms from L2 to L∞. A new definition of the H2 norm of LTI sampled-data systems is first introduced through a sort of intermediate standpoint of those for the existing two definitions. We then establish unified treatment of the three definitions of the H2 norm through a matrix function G(τ) defined on the sampling interval [0, h). This paper next considers the generalised H2 norms, in which two types of the L∞ norm of the output are considered as the temporal supremum magnitude under the spatial 2-norm and ∞-norm of a vector-valued function. We further give unified treatment of the generalised H2 norms through another matrix function F(θ) which is also defined on [0, h). Through a close connection between G(τ) and F(θ), some theoretical relationships between the H2 and generalised H2 norms are provided. Furthermore, appropriate extensions associated with the treatment of G(τ) and F(θ) to the closed interval [0, h] are discussed to facilitate numerical computations and comparisons of the H2 and generalised H2 norms. Through theoretical and numerical studies, it is shown that the two generalised H2 norms coincide with neither of the three H2 norms of LTI sampled-data systems even though all the five definitions coincide with each other when single-output continuous-time LTI systems are considered as a special class of LTI sampled-data systems. To summarise, this paper clarifies that the five control performance measures are mutually related with each other but they are also intrinsically different from each other.
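For a continuous-time LTI system with impulse-response matrix g(t), disturbance w, and output z, the two families of measures being unified are, in their standard forms (a sketch; the sampled-data versions in the paper refine these through the matrix functions G(τ) and F(θ)):

\[
\|G\|_{H_2}^{2} \;=\; \int_{0}^{\infty} \operatorname{tr}\!\big(g(t)^{\mathsf{T}} g(t)\big)\,dt,
\qquad
\|G\|_{\mathrm{g}H_2} \;=\; \sup_{\|w\|_{L_2}\le 1} \|z\|_{L_\infty},
\]

with the two generalised H2 variants corresponding to taking the spatial 2-norm or ∞-norm inside the temporal supremum.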
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions, and we apply it to real data collected in the Cascade Mountains. The last paper considers cross-frequency interpolation of the forward response as well as of the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost-always-lucky failure in the case of a right-hand side analytically dependent on frequency. The operator's null space is treated by decomposing the solution into the part in the null space and the part orthogonal to it.
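A sketch of the regularized Gauss-Newton iteration of the kind described, with model vector m = log σ and a smoothing (second-norm) operator W (our notation, not the dissertation's):

\[
m_{k+1} \;=\; m_k + \left(J_k^{\mathsf{T}} J_k + \lambda\, W^{\mathsf{T}} W\right)^{-1}
\left[\, J_k^{\mathsf{T}}\big(d - F(m_k)\big) \;-\; \lambda\, W^{\mathsf{T}} W\, m_k \,\right],
\]

where F is the forward operator, J_k its Jacobian at m_k, and d the data. The data-space variant exploits the identity \((J^{\mathsf{T}}J + \lambda W^{\mathsf{T}}W)^{-1}J^{\mathsf{T}} = (W^{\mathsf{T}}W)^{-1}J^{\mathsf{T}}\big(J(W^{\mathsf{T}}W)^{-1}J^{\mathsf{T}} + \lambda I\big)^{-1}\) to replace the model-space solve by one of the (much smaller) data dimension, which is where the memory and time savings arise.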
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, the low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, the eigenspace-based minimum variance (EIBMV) beamformer is employed for second harmonic USI. Tissue harmonic imaging (THI) is achieved by the pulse inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of varying the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point-target and cyst phantoms), and the proper EIBMV parameters for THI are indicated.
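A minimal Python sketch of eigenspace-projected MV weights, assuming a covariance estimate R and steering vector a are already in hand; the paper additionally tunes a subarray length K and loading δ, which this sketch folds into a single diagonal-loading term.

    import numpy as np

    def eibmv_weights(R, a, signal_rank, load=1e-3):
        # Minimum variance weights with eigenspace projection (a sketch).
        n = R.shape[0]
        Rl = R + load * np.trace(R).real / n * np.eye(n)  # diagonal loading
        Ri_a = np.linalg.solve(Rl, a)
        w_mv = Ri_a / (a.conj() @ Ri_a)                   # classic MV weights
        evals, V = np.linalg.eigh(Rl)                     # eigenvalues ascending
        Es = V[:, -signal_rank:]                          # dominant signal subspace
        return Es @ (Es.conj().T @ w_mv)                  # project MV weights onto it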
NASA Astrophysics Data System (ADS)
Guo, Long; Zhang, Xingzhong
2018-03-01
The mechanical and creep properties of Q345c continuous-casting slab subjected to uniaxial tensile tests at high temperature were considered in this paper. The minimum creep strain rate and creep rupture life equations, whose parameters were calculated by inverse estimation using regression analysis, were derived from the experimental data. The minimum creep strain rate under constant stress increases as the temperature rises from 1000 °C to 1200 °C. A new casting-machine curve aimed at fully exploiting high-temperature creep behaviour is proposed in this paper. The basic arc segment is cancelled in the new curve so that the length of the straightening area can be extended and the time available for creep can be increased significantly. For the new casting-machine curve, the maximum straightening strain rate at the slab surface is less than the minimum creep strain rate, so slab straightening deformation based on the creep behaviour of the steel at high temperature can be carried out in the process of Q345c continuous casting. The effect of the high-temperature creep property on slab straightening deformation is positive. This is helpful for the design of new casting machines and the improvement of existing ones.
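Creep-rate equations of the kind fitted here are commonly of the Norton-Arrhenius form; a sketch of such a parameterization (the specific constants for Q345c are those obtained by the authors' regression, not reproduced here):

\[
\dot{\varepsilon}_{\min} \;=\; A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
\]

where σ is the applied stress, T the absolute temperature, Q an apparent activation energy, R the gas constant, and A, n regression constants. This form is consistent with the reported increase of the minimum creep rate with temperature at constant stress.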
Wardell, Jeffrey D.; Read, Jennifer P.
2012-01-01
Social learning mechanisms, such as descriptive norms for drinking behavior (norms) and positive alcohol expectancies (PAEs), play a major role in college student alcohol use. According to the principle of reciprocal determinism (Bandura, 1977), norms and PAEs should be reciprocally associated with alcohol use, each influencing one another over time. However, the nature of these prospective relationships for college students is in need of further investigation. This study provided the first examination of the unique reciprocal associations among norms, PAEs, and drinking together in a single model. PAEs become more stable with age, whereas norms are likely to be more dynamic upon college entry. Thus, we hypothesized that alcohol use would show stronger reciprocal associations with norms than with PAEs for college students. Students (N=557; 67% female) completed online measures of PAEs, norms and quantity and frequency of alcohol use in September of their first (T1), second (T2), and third (T3) years of college. Reciprocal associations were analyzed using a cross-lagged panel design. PAEs had unidirectional influences on frequency and quantity of alcohol use, with no prospective effects from alcohol use to PAEs. Reciprocal associations were observed between norms and alcohol use, but only for quantity and not frequency. Specifically, drinking quantity prospectively predicted quantity norms and quantity norms prospectively predicted drinking quantity. This effect was observed across both years in the model. These findings support the reciprocal determinism hypothesis for norms but not for PAEs in college students, and may help to inform norm-based interventions. PMID:23088403
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klawikowski, S; Christian, J; Schott, D
Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics including: grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features were calculated for each daily CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value verses accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife) determining which metrics were most correlated to response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation to our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.
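An illustrative Python reconstruction of the trend-consolidation step described above; the model-selection criterion (raw residual sum of squares) and the log-linear exponential fit are our assumptions, not details given in the abstract.

    import numpy as np

    def best_trend(dose, metric):
        # Fit linear, quadratic and exponential trends of a texture metric
        # against accumulated dose; return the name of the best-fitting form.
        fits = {}
        for name, deg in (("linear", 1), ("quadratic", 2)):
            coef = np.polyfit(dose, metric, deg)
            resid = metric - np.polyval(coef, dose)
            fits[name] = float(resid @ resid)
        if np.all(metric > 0):                      # log-linear exponential fit
            coef = np.polyfit(dose, np.log(metric), 1)
            resid = metric - np.exp(np.polyval(coef, dose))
            fits["exponential"] = float(resid @ resid)
        return min(fits, key=fits.get)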
Regulating Gender Performances: Power and Gender Norms in Faculty Work
ERIC Educational Resources Information Center
Lester, Jaime
2011-01-01
Despite the steady increase of women in faculty positions over the last few decades and the research on gender norms in the academy, what remains unclear is why many female faculty continue to conform to gender norms despite their acknowledgement of the discriminatory nature of these norms, their dissatisfaction performing the norms, and the lack…
ERIC Educational Resources Information Center
Noordhuizen, Suzanne; de Graaf, Paul M.; Sieben, Inge
2011-01-01
This study advances our understanding of fertility norms by examining whether fertility norms remain stable over time. In addition, this article also investigates whether these norms are influenced by (a) sociodemographic background characteristics; (b) fertility norms of close family members: partners, siblings, parents, and children; and (c)…
Time lag and communication in changing unpopular norms.
Gërxhani, Klarita; Bruggeman, Jeroen
2015-01-01
Humans often coordinate their social lives through norms. When a large majority of people are dissatisfied with an existing norm, it seems obvious that they will change it. Often, however, this does not occur. We investigate how a time lag between individual support of a norm change and the change itself hinders such change, related to the critical mass of supporters needed to effectuate the change, and the (im)possibility of communicating about it. To isolate these factors, we utilize a laboratory experiment. As predicted, we find unambiguous effects of time lag on precluding norm change; a higher threshold for a critical mass does so as well. Communication facilitates choosing superior norms but it does not necessarily lead to norm change when the uncertainty on whether there will be a norm change in the future is high. Communication seems to help coordination on actions at the present but not the future. Hence, the uncertainty driven by time lag makes individuals choose the status quo, here the unpopular norm.
Contests versus Norms: Implications of Contest-Based and Norm-Based Intervention Techniques
Bergquist, Magnus; Nilsson, Andreas; Hansla, André
2017-01-01
Interventions using either contests or norms can promote environmental behavioral change. Yet research on the implications of contest-based and norm-based interventions is lacking. Based on Goal-framing theory, we suggest that a contest-based intervention frames a gain goal promoting intensive but instrumental behavioral engagement. In contrast, the norm-based intervention was expected to frame a normative goal activating normative obligations for targeted and non-targeted behavior and motivation to engage in pro-environmental behaviors in the future. In two studies participants (n = 347) were randomly assigned to either a contest- or a norm-based intervention technique. Participants in the contest showed more intensive engagement in both studies. Participants in the norm-based intervention tended to report higher intentions for future energy conservation (Study 1) and higher personal norms for non-targeted pro-environmental behaviors (Study 2). These findings suggest that contest-based intervention technique frames a gain goal, while norm-based intervention frames a normative goal. PMID:29218026
Norms as Group-Level Constructs: Investigating School-Level Teen Pregnancy Norms and Behaviors.
Mollborn, Stefanie; Domingue, Benjamin W; Boardman, Jason D
2014-09-01
Social norms are a group-level phenomenon, but past quantitative research has rarely measured them in the aggregate or considered their group-level properties. We used the school-based design of the National Longitudinal Study of Adolescent Health to measure normative climates regarding teen pregnancy across 75 U.S. high schools. We distinguished between the strength of a school's norm against teen pregnancy and the consensus around that norm. School-level norm strength and dissensus were strongly (r = -0.65) and moderately (r = 0.34) associated with pregnancy prevalence within schools, respectively. Normative climate partially accounted for observed racial differences in school pregnancy prevalence, but norms were a stronger predictor than racial composition. As hypothesized, schools with both a stronger average norm against teen pregnancy and greater consensus around the norm had the lowest pregnancy prevalence. Results highlight the importance of group-level normative processes and of considering the local school environment when designing policies to reduce teen pregnancy.
A Review of Norms and Normative Multiagent Systems
Mahmoud, Moamin A.; Ahmad, Mohd Sharifuddin; Mustapha, Aida
2014-01-01
Norms and normative multiagent systems have become subjects of interest for many researchers. Such interest is caused by the need for agents to exploit norms in enhancing their performance in a community. The term norm is used to characterize the behaviours of community members. The concept of normative multiagent systems is used to facilitate collaboration and coordination among social groups of agents. Much research has been conducted on norms, investigating the fundamental concepts, definitions, classification, and types of norms and normative multiagent systems, including normative architectures and normative processes. However, very few studies have comprehensively analyzed the literature in advancing the current state of norms and normative multiagent systems. Consequently, this paper attempts to present the current state of research on norms and normative multiagent systems and to propose a norm life-cycle model based on the review of the literature. Subsequently, this paper highlights significant areas for future work. PMID:25110739
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with a weighted Schatten-p norm and an Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, so that the rank function can be approximated more accurately. In addition, an Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
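From the description above, the WSPQ objective is plausibly of the form (a sketch; the weighting scheme and constraint handling are as we infer them):

\[
\min_{Z,\,E}\;\; \sum_{i} w_i\,\sigma_i(Z)^{p} \;+\; \lambda\,\|E\|_{q}^{q}
\quad \text{s.t.} \quad X = XZ + E,
\]

where σ_i(Z) are the singular values of the representation matrix Z and w_i their weights; the weighted Schatten-p term (0 < p ≤ 1) replaces the nuclear norm ∑_i σ_i(Z) of standard LRR, and the elementwise Lq term replaces the L21 noise model.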
NORM Management in the Oil & Gas Industry
NASA Astrophysics Data System (ADS)
Cowie, Michael; Mously, Khalid; Fageeha, Osama; Nassar, Rafat
2008-08-01
It has been established that Naturally Occurring Radioactive Materials (NORM) accumulates at various locations along the oil/gas production process. Components such as wellheads, separation vessels, pumps, and other processing equipment can become NORM contaminated, and NORM can accumulate in sludge and other waste media. Improper handling and disposal of NORM contaminated equipment and waste can create a potential radiation hazard to workers and the environment. Saudi Aramco Environmental Protection Department initiated a program to identify the extent, form and level of NORM contamination associated with the company operations. Once identified the challenge of managing operations which had a NORM hazard was addressed in a manner that gave due consideration to workers and environmental protection as well as operations' efficiency and productivity. The benefits of shared knowledge, practice and experience across the oil & gas industry are seen as key to the establishment of common guidance on NORM management. This paper outlines Saudi Aramco's experience in the development of a NORM management strategy and its goals of establishing common guidance throughout the oil and gas industry.
Cusp anomalous dimension and rotating open strings in AdS/CFT
NASA Astrophysics Data System (ADS)
Espíndola, R.; García, J. Antonio
2018-03-01
In the context of AdS/CFT we provide analytical support for the proposed duality between a Wilson loop with a cusp, the cusp anomalous dimension, and the meson model constructed from a rotating open string with high angular momentum. This duality was previously studied using numerical tools in [1]. Our result implies that the minimum of the profile function of the minimal-area surface dual to the Wilson loop is related to the inverse of the bulk penetration of the dual string that hangs from the quark-antiquark pair (meson) in the gauge theory.
How preschoolers react to norm violations is associated with culture.
Gampe, Anja; Daum, Moritz M
2018-01-01
Children from the age of 3 years understand social norms as such and enforce these norms in interactions with others. Differences in parental and institutional education across cultures make it likely that children receive divergent information about how to act in cases of norm violations. In the current study, we investigated whether cultural values are associated with the ways in which children react to norm violations. We tested 80 bicultural 3-year-olds with a norm enforcement paradigm and analyzed their reactions to norm violations. The reactions were correlated with the children's parental cultural values using the Global Leadership and Organizational Behavior Effectiveness (GLOBE) scales, and the results show that parental culture was associated with children's reactions to norm violations. The three strongest correlations were found for institutional collectivism, performance orientation, and assertiveness. Copyright © 2017 Elsevier Inc. All rights reserved.
Perceived peer drinking norms and responsible drinking in UK university settings.
Robinson, Eric; Jones, Andrew; Christiansen, Paul; Field, Matt
2014-09-01
Heavy drinking is common among students at UK universities. US students overestimate how much their peers drink and correcting this through the use of social norm messages may promote responsible drinking. We tested whether there is an association between perceived campus drinking norms and usual drinking behavior in UK university students and whether norm messages about responsible drinking correct normative misperceptions and increase students' intentions to drink responsibly. 1,020 UK university students took part in an online study. Participants were exposed to one of five message types: a descriptive norm, an injunctive norm, a descriptive and injunctive norm, or one of two control messages. Message credibility was assessed. Afterwards participants completed measures of intentions to drink responsibly and we measured usual drinking habits and perceptions of peer drinking. Perceptions of peer drinking were associated modestly with usual drinking behavior, whereby participants who believed other students drank responsibly also drank responsibly. Norm messages changed normative perceptions, but not in the target population of participants who underestimated responsible drinking in their peers at baseline. Norm messages did not increase intentions to drink responsibly and although based on accurate data, norm messages were not seen as credible. In this UK based study, although perceived social norms about peer drinking were associated with individual differences in drinking habits, campus wide norm messages about responsible drinking did not affect students' intentions to drink more responsibly. More research is required to determine if this approach can be applied to UK settings.
The Role of Perceived Injunctive Alcohol Norms in Adolescent Drinking Behavior
Pedersen, Eric R.; Osilla, Karen Chan; Miles, Jeremy N.V.; Tucker, Joan S.; Ewing, Brett A.; Shih, Regina A.; D’Amico, Elizabeth J.
2016-01-01
Peers have a major influence on youth during adolescence, and perceptions about peer alcohol use (perceived norms) are often associated with personal drinking behavior among youth. Most of the research on perceived norms among adolescents focuses on perceived descriptive norms only, or perceptions about peers’ behavior, and correcting these perceptions are a major focus of many prevention programs with adolescents. In contrast, perceived injunctive norms, which are personal perceptions about peers’ attitudes regarding the acceptability of behaviors, have been minimally examined in the adolescent drinking literature. Yet correcting perceptions about these perceived peer attitudes may be an important component to include in prevention programs with youth. Using a sample of 2,493 high school-aged youth (mean age = 17.3), we assessed drinking behavior (past year use; past month frequency, quantity, and peak drinks), drinking consequences, and perceived descriptive and injunctive norms to examine the relationships of perceived injunctive and descriptive norms on adolescent drinking behavior. Findings indicated that although perceived descriptive norms were associated with some drinking outcomes (past year use; past month frequency; past month quantity; peak drinks), perceived injunctive norms were associated with all drinking outcomes, including outcomes of consequences, even after controlling for perceived descriptive norms. Findings suggest that consideration of perceived injunctive norms may be important in models of adolescent drinking. Prevention programs that do not include injunctive norms feedback may miss an important opportunity to enhance effectiveness of such prevention programs targeting adolescent alcohol use. PMID:27978424
Computer Model Inversion and Uncertainty Quantification in the Geosciences
NASA Astrophysics Data System (ADS)
White, Jeremy T.
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
ERIC Educational Resources Information Center
Parent, Mike C.; Moradi, Bonnie
2011-01-01
The Conformity to Feminine Norms Inventory-45 (CFNI-45; Parent & Moradi, 2010) is an important tool for assessing level of conformity to feminine gender norms and for investigating the implications of such norms for women's functioning. The authors of the present study assessed the factor structure, measurement invariance, reliability, and…
NASA Astrophysics Data System (ADS)
Makar, Katie; Fielding-Wells, Jill
2018-03-01
The 3-year study described in this paper aims to create new knowledge about inquiry norms in primary mathematics classrooms. Mathematical inquiry addresses complex problems that contain ambiguities, yet classroom environments often do not adopt norms that promote curiosity, risk-taking and negotiation needed to productively engage with complex problems. Little is known about how teachers and students initiate, develop and maintain norms of mathematical inquiry in primary classrooms. The research question guiding this study is, "How do classroom norms develop that facilitate student learning in primary classrooms which practice mathematical inquiry?" The project will (1) analyse a video archive of inquiry lessons to identify signature practices that enhance productive classroom norms of mathematical inquiry and facilitate learning, (2) engage expert inquiry teachers to collaborate to identify and design strategies for assisting teachers to develop and sustain norms over time that are conducive to mathematical inquiry and (3) support and study teachers new to mathematical inquiry adopting these practices in their classrooms. Anticipated outcomes include identification and illustration of classroom norms of mathematical inquiry, signature practices linked to these norms and case studies of primary teachers' progressive development of classroom norms of mathematical inquiry and how they facilitate learning.
Rimal, Rajiv N
2008-01-01
Informed by the theory of normative social behavior, this article sought to determine the underlying mediating and moderating factors in the relationship between descriptive norms and behavioral intentions. Furthermore, the theory was extended by asking whether, and what role, behavioral identity played in normative influences. Simulating the central message of norms-based interventions to reduce college students' alcohol consumption, this field experiment manipulated descriptive norms by informing half of the students (n = 665) that their peers consumed less alcohol than they might believe. Others (n = 672) were not provided any norms information. Students' injunctive norms, outcome expectations, group identity, behavioral identity, and behavioral intention surrounding alcohol consumption were then measured. Exposure to the low-norms information resulted in a significant drop in estimates of the prevalence of consumption. Injunctive norms and outcome expectations partially mediated and also moderated the relationship between descriptive norms and behavioral intentions. Group identity and behavioral identity also moderated the relationship between descriptive norms and behavioral intentions, but the effect size was relatively small for group identity. Implications for health campaigns are also discussed.
Prosocial Norms as a Positive Youth Development Construct: A Conceptual Review
Siu, Andrew M. H.; Shek, Daniel T. L.; Law, Ben
2012-01-01
Prosocial norms like reciprocity, social responsibility, altruism, and volunteerism are ethical standards and beliefs that youth development programs often want to promote. This paper reviews evolutionary, social-cognitive, and developmental theories of prosocial development and analyzes how young people learn and adopt prosocial norms. The review shows that very few current theories explicitly address how prosocial norms, in the form of feelings of moral obligation, may be challenged by a norm of self-interest and by social circumstances when prosocial acts are needed. It is necessary to develop theories that treat prosocial norms as a central construct, and a new social cognitive theory of norm activation has the potential to help us understand how prosocial norms may be applied. This paper also highlights how little we know about how young people perceive and receive prosocial norms, and how influential school policies and peer influence are on prosocial development. Lastly, while training in interpersonal competence (e.g., empathy, moral reasoning, etc.) is commonly used in youth development, its effectiveness has not been systematically evaluated. It will also be interesting to examine how computer and information technology or video games may be used in e-learning of prosocial norms. PMID:22666157
Adaptation and perceptual norms
NASA Astrophysics Data System (ADS)
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
2007-02-01
We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
Not Just the Norm: Exemplar-Based Models also Predict Face Aftereffects
Ross, David A.; Deroche, Mickael; Palmeri, Thomas J.
2014-01-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted towards a face with opposite attributes to the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation. PMID:23690282
Social influences among young drivers on talking on the mobile phone while driving.
Riquelme, Hernan E; Al-Sammak, Fawaz Saleh; Rios, Rosa E
2010-04-01
This study set out to measure the influence of injunctive, subjective, verbal, and behavioral norms on talking on a mobile phone while driving. In particular, it examines social influences that have been neglected in past research, namely, injunctive norms and explicit verbal and behavioral norms communicated by law enforcers with regard to using a mobile phone when driving. All four types of social norms have rarely been used in studies of this social phenomenon, except for occasional exceptions drawing on Ajzen's theory of planned behavior, which addresses only one: subjective norms. Regression analysis of data collected from 217 questionnaires completed by young drivers is used to predict the intention of motorists to continue talking on their mobile phones while driving. Selective interaction effects, the purpose of the call, and injunctive and subjective norms were included. The results show that the explicit verbal and behavioral law enforcement norms, the subjective norms, and the interaction of the injunctive norm with the purpose of the call are significant predictors of the unlawful behavior. Taken together, the results imply that social marketing is likely to encounter difficulty in changing behavior because the subjective norm (what others think I should do) coupled with the lack of enforcement (verbal norms) play important roles in maintaining the unlawful behavior. Moreover, the perception that talking on the mobile phone while driving is acceptable behavior (injunctive norm), in conjunction with the purpose of the call, creates further challenges for social marketers. The results have implications for policy makers and enforcers: law enforcement should prevent the unlawful behavior in the first place, and campaigns may be directed at correcting the audience's false norms, using persuasive communication to emphasize the potential costs of maintaining the unlawful behavior.
How embarrassing! The behavioral and neural correlates of processing social norm violations
van Steenbergen, Henk; Kreuk, Tanja; van der Wee, Nic J. A.; Westenberg, P. Michiel
2017-01-01
Social norms are important for human social interactions, and violations of these norms are evaluated partly on the intention of the actor. Here, we describe the revised Social Norm Processing Task (SNPT-R), a paradigm enabling the study of behavioral and neural responses to intended and unintended social norm violations among both adults and adolescents. We investigated how participants (adolescents and adults, n = 87) rate intentional and unintentional social norm violations with respect to inappropriateness and embarrassment, and we examined the brain activation patterns underlying the processing of these transgressions in an independent sample of 21 adults using functional Magnetic Resonance Imaging (fMRI). We hypothesized that we would find activation within the medial prefrontal cortex, temporo-parietal cortex and orbitofrontal cortex in response to both intentional and unintentional social norm violations, with more pronounced activation for the intentional social norm violations in these regions and in the amygdala. Participants' ratings confirmed the hypothesis that the three types of stories are evaluated differently with respect to intentionality: intentional social norm violations were rated as the most inappropriate and most embarrassing. Furthermore, fMRI results showed that reading stories on intentional and unintentional social norm violations evoked activation within the frontal pole, the paracingulate gyrus and the superior frontal gyrus. In addition, processing unintentional social norm violations was associated with activation in, among others, the orbitofrontal cortex, middle frontal gyrus and superior parietal lobule, while reading intentional social norm violations was related to activation in the left amygdala. These regions have been previously implicated in thinking about one's self, thinking about others and moral reasoning. Together, these findings indicate that the SNPT-R could serve as a useful paradigm for examining social norm processing, both at the behavioral and the neural level. PMID:28441460
Stark, L; Asghar, K; Seff, I; Cislaghi, B; Yu, G; Tesfay Gessesse, T; Eoomkham, J; Assazenew Baysa, A; Falb, K
2018-01-01
Evidence suggests adolescent self-esteem is influenced by beliefs of how individuals in their reference group perceive them. However, few studies examine how gender- and violence-related social norms affect self-esteem among refugee populations. This paper explores relationships between gender-inequitable and victim-blaming social norms, personal attitudes, and self-esteem among adolescent girls participating in a life skills program in three Ethiopian refugee camps. Ordinary least squares multivariable regression analysis was used to assess the associations between attitudes and social norms, and self-esteem. Key independent variables of interest included a scale measuring personal attitudes toward gender-inequitable norms, a measure of perceived injunctive norms capturing how a girl believed her family and community would react if she was raped, and a peer-group measure of collective descriptive norms surrounding gender inequity. The key outcome variable, self-esteem, was measured using the Rosenberg self-esteem scale. Girls' personal attitudes toward gender-inequitable norms were not significantly predictive of self-esteem at endline, when adjusting for other covariates. Collective peer norms surrounding the same gender-inequitable statements were significantly predictive of self-esteem at endline (β = -0.130, p = 0.024). Additionally, perceived injunctive norms surrounding family and community-based sanctions for victims of forced sex were associated with a decline in self-esteem at endline (β = -0.103, p = 0.014). Significant findings for collective descriptive norms and injunctive norms remained when controlling for all three constructs simultaneously. Findings suggest shifting collective norms around gender inequity, particularly at the community and peer levels, may sustainably support the safety and well-being of adolescent girls in refugee settings.
Changing Gender Norms and Marriage Dynamics in the United States.
Pessin, Léa
2018-02-01
Using a regional measure of gender norms from the General Social Surveys together with marital histories from the Panel Study of Income Dynamics, this study explored how gender norms were associated with women's marriage dynamics between 1968 and 2012. Results suggested that a higher prevalence of egalitarian gender norms predicted a decline in marriage formation. This decline was, however, only true for women without a college degree. For college-educated women, the association between gender norms and marriage formation became positive when gender egalitarianism prevailed. The findings also revealed an inverted U-shaped relationship between gender norms and divorce: an initial increase in divorce was observed when gender norms were predominantly traditional. The association, however, reversed as gender norms became egalitarian. No differences by education were found for divorce. The findings partially support the gender revolution framework but also highlight greater barriers to marriage for low-educated women as societies embrace gender equality.
Bundles of Norms About Teen Sex and Pregnancy.
Mollborn, Stefanie; Sennott, Christie
2015-09-01
Teen pregnancy is a cultural battleground in struggles over morality, education, and family. At its heart are norms about teen sex, contraception, pregnancy, and abortion. Analyzing 57 interviews with college students, we found that "bundles" of related norms shaped the messages teens hear. Teens did not think their communities encouraged teen sex or pregnancy, but normative messages differed greatly, with either moral or practical rationalizations. Teens readily identified multiple norms intended to regulate teen sex, contraception, abortion, childbearing, and the sanctioning of teen parents. Beyond influencing teens' behavior, norms shaped teenagers' public portrayals and post hoc justifications of their behavior. Although norm bundles are complex to measure, participants could summarize them succinctly. These bundles and their conflicting behavioral prescriptions create space for human agency in negotiating normative pressures. The norm bundles concept has implications for teen pregnancy prevention policies and can help revitalize social norms for understanding health behaviors. © The Author(s) 2014.
Norm stability at Alcatraz Island: Effects of time and changing conditions
William Valliere; Robert Manning
2010-01-01
Research suggests that visitors often have norms about the resource and social conditions acceptable in a park and that understanding such norms can be useful for park management. Most studies of norms use data from cross-sectional surveys, and little is known about how norms may change over time. To explore this issue, we conducted a study in 2007 to determine whether...
Eisenberg, Marla E.; Toumbourou, John W.; Catalano, Richard F.; Hemphill, Sheryl A.
2014-01-01
Identifying specific aspects of peer social norms that influence adolescent substance use may assist international prevention efforts. This study examines two aggregated measures of social norms in the school setting and their predictive association with substance (alcohol, tobacco and marijuana) use 2 years later in a large cross-national population-based cohort of adolescents. The primary hypothesis is that in Grade 7 both “injunctive” school norms (where students associate substance use with “coolness”) and “descriptive” norms (where student substance use is common) will predict Grade 9 substance use. Data come from the International Youth Development Study, including 2,248 students (51.2 % female) in the US and Australia attending 121 schools in Grade 7. Independent variables included injunctive norms (aggregating measures of school-wide coolness ratings of each substance use) and descriptive norms (aggregating the prevalence of school substance use) in Grade 7. Dependent variables included binge drinking and current use of alcohol, tobacco and marijuana in Grade 9. Associations between each type of school-wide social norm and substance use behaviors in Grade 9 were tested using multilevel logistic regression, adjusting for covariates. In unadjusted models, both injunctive and descriptive norms each significantly predicted subsequent substance use. In fully adjusted models, injunctive norms were no longer significantly associated with Grade 9 use, but descriptive norms remained significantly associated with tobacco and marijuana use in the expected direction. The findings identify descriptive social norms in the school context as a particularly important area to address in adolescent substance use prevention efforts. PMID:24633850
Gu, Xiaosi; Wang, Xingchao; Hula, Andreas; Wang, Shiwei; Xu, Shuai; Lohrenz, Terry M.; Knight, Robert T.; Gao, Zhixian; Dayan, Peter
2015-01-01
Social norms and their enforcement are fundamental to human societies. The ability to detect deviations from norms and to adapt to norms in a changing environment is therefore important to individuals' normal social functioning. Previous neuroimaging studies have highlighted the involvement of the insular and ventromedial prefrontal (vmPFC) cortices in representing norms. However, the necessity and dissociability of their involvement remain unclear. Using model-based computational modeling and neuropsychological lesion approaches, we examined the contributions of the insula and vmPFC to norm adaptation in seven human patients with focal insula lesions and six patients with focal vmPFC lesions, in comparison with forty neurologically intact controls and six brain-damaged controls. There were three computational signals of interest as participants played a fairness game (ultimatum game): sensitivity to the fairness of offers, sensitivity to deviations from expected norms, and the speed at which people adapt to norms. Significant group differences were assessed using bootstrapping methods. Patients with insula lesions displayed abnormally low adaptation speed to norms, yet detected norm violations with greater sensitivity than controls. Patients with vmPFC lesions did not have such abnormalities, but displayed reduced sensitivity to fairness and were more likely to accept the most unfair offers. These findings provide compelling computational and lesion evidence supporting the necessary, yet dissociable roles of the insula and vmPFC in norm adaptation in humans: the insula is critical for learning to adapt when reality deviates from norm expectations, and the vmPFC is important for valuation of fairness during social exchange. PMID:25589742
Exploring theoretical frameworks for the analysis of fertility fluctuations.
Micheli, G A
1988-05-01
The Easterlin theory, popular during the 1970s, explained population fluctuations in terms of maximization of choice, based on the evaluation of previously acquired information. Fluctuations in procreational patterns were seen as responses to conflict between 2 consecutive generations, in which the propensity to procreate is inversely related to cohort size. However, the number of demographic trends not directly explainable by the hypothesis implies either that the model must be extended over a longer time frame or that there has been a drastic change of regime, i.e., a basic change in the popular attitudes which determine decision-making behavior. 4 strategic principles underlie reproductive decisions: primary adaptation, economic utility, norm internalization, and identity reinforcement. The decision-making process is determined by the relative importance of these 4 principles. Primary adaptation implies inertia, i.e., nondecision. Economic utility implies the use of rational choice to maximize economic gain. Norm internalization implies conforming to the behavior of one's sociocultural peers as if it were one's own choice. Identity reinforcement implies that one decides to reproduce because procreation is a way of extending one's identity forward in time. The 2 active decision-making attitudes, economic rationality and identity reinforcement, are strategically both antagonistic and complementary. This polarity of behavior lends itself to analysis in terms of the predator-prey model, in which population is seen as the predator and resources as the prey. However, in applying the model, one must keep in mind that the real demographic picture is not static and that it is subject to deformation by external forces.
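The predator-prey framing invoked above can be made concrete with the classic Lotka-Volterra equations. The sketch below casts population P as the predator and resources R as the prey and shows the resulting cyclical fluctuations; the coefficients are purely illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Lotka-Volterra system: resources R ('prey') grow and are consumed
    # by the population P ('predator'), which grows on resources and decays
    def lv(t, y, a=1.0, b=0.4, c=0.3, d=0.5):
        R, P = y
        return [a * R - b * R * P,
                c * R * P - d * P]

    sol = solve_ivp(lv, (0, 60), [2.0, 1.0], max_step=0.05)
    # the population oscillates rather than settling, echoing the idea of
    # persistent fertility fluctuations driven by resource interaction
    print("population cycles between",
          sol.y[1].min().round(2), "and", sol.y[1].max().round(2))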
From Norm Adoption to Norm Internalization
NASA Astrophysics Data System (ADS)
Conte, Rosaria; Andrighetto, Giulia; Villatoro, Daniel
This presentation describes advances in modeling the mental dynamics of norms. In particular, it focuses on the process from norm adoption, possibly yielding new normative goals, to different forms of norm compliance, including norm internalization, which has long been studied in the social-behavioral sciences and moral philosophy. Recently, the debate was revived within the rationality approach, pointing to the role of norm internalization as a less costly and more reliable enforcement system than social control. So far, little attention has been paid to the mental underpinnings of internalization. In this presentation, a rich cognitive model of different types, degrees and factors of internalization is shown, together with its initial implementation on EMIL-A, a normative agent architecture developed and applied to the…
Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.
Li, Yuanqing; Amari, Shun-Ichi
2010-07-01
In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have been receiving much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by linear programming. In many cases, the 0-norm solution can be obtained by finding the 1-norm solution, and many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second is necessary but not sufficient, yet easy to verify. In this paper, we analyze the second condition within a stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
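To make the 0-norm/1-norm relationship concrete, the following sketch recovers a sparse vector from underdetermined measurements by solving the 1-norm problem as a linear program. The split x = u - v with u, v >= 0 is the standard reformulation; the dimensions, random matrix ensemble and solver choice are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, k = 20, 50, 3                      # measurements, unknowns, nonzeros
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    # min ||x||_1 s.t. Ax = b, rewritten with x = u - v, u, v >= 0
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    x_l1 = res.x[:n] - res.x[n:]

    # when the equivalence condition holds, the 1-norm solution coincides
    # with the sparsest (0-norm) solution; this often succeeds for small k
    print("1-norm solution matches sparsest:", np.allclose(x_l1, x_true, atol=1e-6))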
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties; we then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data are available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
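A minimal sketch of the memory-function idea under a deliberately toy setup: a two-parameter forward model stands in for the seismic/EM simulator, Sobol quasi Monte Carlo points explore the parameter space, and the importance weights from the first survey act as the stored posterior (the "memory function") that becomes the prior for the next time-lapse update. All values and the forward model are hypothetical.

    import numpy as np
    from scipy.stats import qmc

    def forward(theta):
        # toy two-output response standing in for a seismic/EM simulator
        return np.array([theta[0] + theta[1], theta[0] * theta[1]])

    d_obs1 = np.array([1.20, 0.32])          # first survey (hypothetical)
    d_obs2 = np.array([1.25, 0.30])          # later time-lapse survey
    sigma = 0.05

    # quasi Monte Carlo (Sobol) exploration of the 2-D parameter space
    sampler = qmc.Sobol(d=2, scramble=True, seed=1)
    theta = qmc.scale(sampler.random(2048), [0.0, 0.0], [2.0, 2.0])

    def loglike(theta, d_obs):
        resid = np.array([forward(t) - d_obs for t in theta])
        return -0.5 * np.sum((resid / sigma) ** 2, axis=1)

    # first inversion: weights encode the 'memory function' (posterior pdf)
    w = np.exp(loglike(theta, d_obs1)); w /= w.sum()

    # later update: the stored posterior acts as the new prior, so the
    # information from the first survey is retained, not discarded
    w2 = w * np.exp(loglike(theta, d_obs2)); w2 /= w2.sum()
    print("posterior mean after both surveys:", w2 @ theta)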
Global moment tensor computation at GFZ Potsdam
NASA Astrophysics Data System (ADS)
Saul, J.; Becker, J.; Hanka, W.
2011-12-01
As part of its earthquake information service, GFZ Potsdam has started to provide seismic moment tensor solutions for significant earthquakes world-wide. The software used to compute the moment tensors is a GFZ-Potsdam in-house development, which uses the framework of the software SeisComP 3 (Hanka et al., 2010). SeisComP 3 (SC3) is a software package for seismological data acquisition, archival, quality control and analysis. SC3 is developed by GFZ Potsdam with significant contributions from its user community. The moment tensor inversion technique uses a combination of several wave types, time windows and frequency bands depending on magnitude and station distance. Wave types include body, surface and mantle waves as well as the so-called 'W-Phase' (Kanamori and Rivera, 2008). The inversion is currently performed in the time domain only. An iterative centroid search can be performed independently both horizontally and in depth. Moment tensors are currently computed in a semi-automatic fashion. This involves inversions that are performed automatically in near-real time, followed by analyst review prior to publication. The automatic results are quite often good enough to be published without further improvements, sometimes in less than 30 minutes from origin time. In those cases where a manual interaction is still required, the automatic inversion usually does a good job at pre-selecting those traces that are the most relevant for the inversion, keeping the work required for the analyst at a minimum. Our published moment tensors are generally in good agreement with those published by the Global Centroid-Moment-Tensor (GCMT) project for earthquakes above a magnitude of about Mw 5. Additionally we provide solutions for smaller earthquakes above about Mw 4 in Europe, which are normally not analyzed by the GCMT project. We find that for earthquakes above Mw 6, the most robust automatic inversions can usually be obtained using the W-Phase time window. The GFZ earthquake bulletin is located at http://geofon.gfz-potsdam.de/eqinfo. For more information on the SeisComP 3 software, visit http://www.seiscomp3.org.
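Because the waveform response is linear in the six independent moment tensor components, time-domain moment tensor inversion reduces to linear least squares once Green's functions are available. The sketch below illustrates this on synthetic Green's functions; it is a toy stand-in for the GFZ implementation, whose trace selection, time windows and centroid search are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, n_mt = 600, 6                 # waveform samples, MT components

    # columns: synthetic Green's function seismograms for the 6 basis
    # moment tensors (in practice computed for the source-receiver geometry)
    G = rng.standard_normal((n_samples, n_mt))

    m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, -0.2])    # hypothetical MT
    d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # noisy waveforms

    # the inversion for the 6 components is ordinary linear least squares
    m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
    vr = 1 - np.sum((d - G @ m_hat) ** 2) / np.sum(d ** 2)  # variance reduction
    print("estimated MT:", m_hat.round(2), " variance reduction:", round(vr, 3))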
Warthin tumor of the parotid gland: diagnostic value of MR imaging with histopathologic correlation.
Ikeda, Mitsuaki; Motoori, Ken; Hanazawa, Toyoyuki; Nagai, Yuichiro; Yamamoto, Seiji; Ueda, Takuya; Funatsu, Hiroyuki; Ito, Hisao
2004-08-01
The purpose of our study was to describe the MR imaging appearance of Warthin tumors using multiple MR imaging techniques and to interpret the difference in appearance from that of malignant parotid tumors. T1-weighted, T2-weighted, short inversion time inversion recovery, diffusion-weighted, and contrast-enhanced dynamic MR images of 19 Warthin tumors and 17 malignant parotid tumors were reviewed. MR imaging results were compared with those of pathologic analysis. Epithelial stromata and lymphoid tissue with slitlike small cysts in Warthin tumors showed early enhancement and a high washout ratio (≥30%) on dynamic contrast-enhanced images, and accumulations of complicated cysts showed early enhancement and a low washout ratio (<30%). The areas containing complicated cysts showed high signal intensity on T1-weighted images, whereas some foci in those areas showed low signal intensity on short tau inversion recovery images. The mean minimum signal intensity ratios (SIRmin) of Warthin tumors on short tau inversion recovery (0.29 ± 0.22 SD) (P < .01) and T2-weighted images (0.28 ± 0.09) (P < .05) were significantly lower than those of malignant parotid tumors (0.53 ± 0.19, 0.48 ± 0.19). The average washout ratio of Warthin tumors (44.0 ± 20.4%) was higher than that of malignant parotid tumors (11.9 ± 11.6%). The mean apparent diffusion coefficient of Warthin tumors (0.96 ± 0.13 × 10^-3 mm^2/s) was significantly lower (P < .01) than that of malignant tumors (1.19 ± 0.19 × 10^-3 mm^2/s). Detecting hypointense areas on short tau inversion recovery and T2-weighted images or low apparent diffusion coefficient values on diffusion-weighted images was useful for predicting whether salivary gland tumors were Warthin tumors. The findings of the dynamic contrast-enhanced study were also useful.
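The washout criterion above can be computed directly from a dynamic enhancement curve. The helper below uses the conventional definition, washout = (SI_peak - SI_end) / SI_peak × 100, applied to two hypothetical curves; the study's exact formula and timing points may differ, so treat this as an illustrative sketch.

    import numpy as np

    def washout_ratio(signal):
        """Percent signal loss from the post-contrast peak to the last
        dynamic time point: (SI_peak - SI_end) / SI_peak * 100."""
        s = np.asarray(signal, dtype=float)
        return 100.0 * (s.max() - s[-1]) / s.max()

    # hypothetical dynamic enhancement curves (arbitrary units)
    curves = {"Warthin-like": [100, 260, 240, 190, 150],    # strong washout
              "malignant-like": [100, 180, 200, 205, 195]}  # slow washout

    for name, curve in curves.items():
        w = washout_ratio(curve)
        label = "high washout (>=30%)" if w >= 30 else "low washout (<30%)"
        print(f"{name}: {w:.0f}% -> {label}")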
NASA Astrophysics Data System (ADS)
López-Comino, José Ángel; Stich, Daniel; Ferreira, Ana M. G.; Morales, Jose
2015-09-01
Inversions for the full slip distribution of earthquakes provide detailed models of earthquake sources, but stability and non-uniqueness of the inversions are a major concern. The problem is underdetermined in any realistic setting, and significantly different slip distributions may translate to fairly similar seismograms. In such circumstances, inverting for a single best model may become overly dependent on the details of the procedure. Instead, we propose to perform extended fault inversion through falsification. We generate a representative set of heterogeneous slip maps, compute their forward predictions, and falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The remainder of surviving trial models forms our set of coequal solutions. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches. For a feasibility study, we use teleseismic body wave recordings from the 2012 September 5 Nicoya, Costa Rica earthquake, although the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. We generate 10 000 pseudo-random, heterogeneous slip distributions assuming a von Karman autocorrelation function, keeping the rake angle, rupture velocity and slip velocity function fixed. The slip distribution of the 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. Two hundred and fifty-two slip models with a normalized L1 misfit within 5 per cent of the global minimum form our solution set. They consistently show a single dominant slip patch around the hypocentre. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2-3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns such as Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
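The falsification loop itself is simple once a forward operator exists. In the sketch below, Gaussian-smoothed random fields stand in for von Karman-correlated slip maps and a random linear operator stands in for the teleseismic forward problem; the 5 per cent acceptance band mirrors the criterion above, but every numerical choice here is an illustrative assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)
    nx, ny, n_trials, n_data = 16, 16, 10000, 50

    # toy linear forward operator mapping slip to 'waveform' data
    G = rng.standard_normal((n_data, nx * ny)) / np.sqrt(nx * ny)
    slip_true = gaussian_filter(rng.random((nx, ny)), sigma=3)
    d_obs = G @ slip_true.ravel()

    # trial heterogeneous slip maps (smoothed noise standing in for
    # von Karman correlated fields) and their L1 misfits
    trials = gaussian_filter(rng.random((n_trials, nx, ny)), sigma=(0, 3, 3))
    misfit = np.abs(trials.reshape(n_trials, -1) @ G.T - d_obs).sum(axis=1)

    # falsify all trials worse than 5% above the best-fitting one;
    # the survivors form the set of coequal solutions
    keep = misfit <= 1.05 * misfit.min()
    print(f"{keep.sum()} of {n_trials} trial models survive falsification")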
Improvement of electrical resistivity tomography for leachate injection monitoring.
Clément, R; Descloitres, M; Günther, T; Oxarango, L; Morra, C; Laurent, J-P; Gourc, J-P
2010-03-01
Leachate recirculation is a key process in the scope of operating municipal waste landfills as bioreactors, which aims to increase the moisture content to optimize the biodegradation in landfills. Given that liquid flows exhibit a complex behaviour in very heterogeneous porous media, in situ monitoring methods are required. Surface time-lapse electrical resistivity tomography (ERT) is usually proposed. Using numerical modelling with typical 2D and 3D injection plume patterns and 2D and 3D inversion codes, we show that wrong changes of resistivity can be calculated at depth if standard parameters are used for time-lapse ERT inversion. Major artefacts typically exhibit significant increases of resistivity (more than +30%) which can be misinterpreted as gas migration within the waste. In order to eliminate these artefacts, we tested an advanced time-lapse ERT procedure that includes (i) two advanced inversion tools and (ii) two alternative array geometries. The first advanced tool uses invariant regions in the model. The second advanced tool uses an inversion with a "minimum length" constraint. The alternative arrays focus on (i) a pole-dipole array (2D case), and (ii) a star array (3D case). The results show that these two advanced inversion tools and the two alternative arrays remove almost completely the artefacts within +/-5% both for 2D and 3D situations. As a field application, time-lapse ERT is applied using the star array during a 3D leachate injection in a non-hazardous municipal waste landfill. To evaluate the robustness of the two advanced tools, a synthetic model including both true decrease and increase of resistivity is built. The advanced time-lapse ERT procedure eliminates unwanted artefacts, while keeping a satisfactory image of true resistivity variations. This study demonstrates that significant and robust improvements can be obtained for time-lapse ERT monitoring of leachate recirculation in waste landfills. Copyright 2009 Elsevier Ltd. All rights reserved.
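The "minimum length" constraint mentioned above can be written, in a linearized setting, as a damped least-squares problem that penalizes departures from a reference model, so that no resistivity change is introduced where the data do not demand one. The sketch below is a generic illustration with a random sensitivity matrix, not the authors' ERT code; all dimensions and weights are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    n_data, n_cells = 30, 100
    J = rng.standard_normal((n_data, n_cells))       # toy sensitivity (Jacobian)
    m_true = np.zeros(n_cells); m_true[:10] = -0.3   # true change: 10 wetter cells
    d = J @ m_true + 0.01 * rng.standard_normal(n_data)

    m_ref = np.zeros(n_cells)                        # reference model: no change
    lam = 5.0
    # damped ('minimum length') solution:
    #   minimize ||J m - d||^2 + lam * ||m - m_ref||^2
    m_hat = m_ref + np.linalg.solve(J.T @ J + lam * np.eye(n_cells),
                                    J.T @ (d - J @ m_ref))
    # artefacts would show up as resistivity increases in unchanged cells
    print("largest spurious resistivity increase:", m_hat[10:].max().round(3))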
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and super-Halley schemes is either similar or slightly superior to that of the GN scheme close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with further improvement of geophysical data acquisition, become an argument for more accurate higher-order methods like those applied in this paper.
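On a one-dimensional toy cost the difference between the update rules is easy to see: Newton applies a first-order correction to the stationarity condition g(m) = 0, while Halley's step also uses the next derivative. The sketch below shows the plain Halley update (the paper's super-Halley variant weights the correction differently); the cost function is entirely hypothetical.

    # toy 1-D cost with analytic derivatives, standing in for the CSEM misfit
    f = lambda m: (m - 2.0) ** 4 + 0.5 * (m - 2.0) ** 2
    g = lambda m: 4 * (m - 2.0) ** 3 + (m - 2.0)        # gradient
    h = lambda m: 12 * (m - 2.0) ** 2 + 1.0             # Hessian
    t = lambda m: 24 * (m - 2.0)                        # third derivative

    def newton(m, iters=8):
        for _ in range(iters):
            m -= g(m) / h(m)                            # Newton step on g(m)=0
        return m

    def halley(m, iters=8):
        for _ in range(iters):
            # Halley step on g(m)=0 uses one derivative order more than Newton
            m -= 2 * g(m) * h(m) / (2 * h(m) ** 2 - g(m) * t(m))
        return m

    print("Newton:", newton(5.0), " Halley:", halley(5.0))  # both approach 2.0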
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
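A minimal numpy illustration of the SOM-based diagnosis under toy assumptions: two "canopy variables" generate four-band spectra with a built-in compensation effect, a small online SOM is trained on the spectra, and each variable is then mapped back onto the neuron grid. A smooth per-neuron map indicates a variable the spectra constrain well; a noisy map flags the ill-posedness discussed above. Grid size, kernels and schedules are all illustrative, not the paper's 200 × 125 configuration.

    import numpy as np

    rng = np.random.default_rng(5)

    # toy look-up table: 2 variables -> 4-band spectra with compensation
    n = 5000
    lai, chl = rng.random(n), rng.random(n)
    spectra = np.stack([lai + chl, lai * chl,
                        0.5 * lai + 0.5 * chl, (lai + chl) ** 2], axis=1)
    spectra += 0.01 * rng.standard_normal(spectra.shape)

    # minimal online SOM: a grid of prototype spectra
    gx, gy = 20, 15
    W = rng.random((gx, gy, spectra.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                  indexing="ij"), -1)

    for i, x in enumerate(spectra):
        lr = 0.5 * np.exp(-i / n)                       # learning rate decay
        rad = 5.0 * np.exp(-i / n) + 1.0                # neighbourhood decay
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (gx, gy))
        nb = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * rad ** 2))
        W += lr * nb[..., None] * (x - W)

    # trace inputs back to best-matching units: per-neuron variable maps
    lai_sum = np.zeros((gx, gy)); counts = np.full((gx, gy), 1e-9)
    for x, a in zip(spectra, lai):
        i, j = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (gx, gy))
        lai_sum[i, j] += a; counts[i, j] += 1
    print("per-neuron LAI map spread:", (lai_sum / counts).std().round(3))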
ERIC Educational Resources Information Center
Arpan, Laura M.; Barooah, Prabir; Subramany, Rahul
2015-01-01
This study examined building occupants' responses associated with an occupant-based energy-efficiency pilot in a university building. The influence of occupants' values and norms as well as effects of two educational message frames (descriptive vs. moral norms cues) on program support were tested. Occupants' personal moral norm to conserve energy…
MODULATION OF GALACTIC COSMIC RAYS OBSERVED AT L1 IN SOLAR CYCLE 23
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fludra, A., E-mail: Andrzej.Fludra@stfc.ac.uk
2015-01-20
We analyze a unique 15 yr record of galactic cosmic-ray (GCR) measurements made by the SOHO Coronal Diagnostic Spectrometer NIS detectors, recording integrated GCR numbers with energies above 1.0 GeV between 1996 July and 2011 June. We are able to closely reproduce the main features of the SOHO/CDS GCR record using the modulation potential calculated from neutron monitor data by Usoskin et al. The GCR numbers show a clear solar cycle modulation: they decrease by 50% from the 1997 minimum to the 2000 maximum of the solar cycle, then return to the 1997 level in 2007 and continue to rise, in 2009 December reaching a level 25% higher than in 1997. This 25% increase is in contrast with the behavior of Ulysses/KET GCR protons extrapolated to 1 AU in the ecliptic plane, showing the same level in 2008-2009 as in 1997. The GCR numbers are inversely correlated with the tilt angle of the heliospheric current sheet. In particular, the continued increase of SOHO/CDS GCRs from 2007 until 2009 is correlated with the decrease of the minimum tilt angle from 30° in mid-2008 to 5° in late 2009. The GCR level then drops sharply from 2010 January, again consistent with a rapid increase of the tilt angle to over 35°. This shows that the extended 2008 solar minimum was different from the 1997 minimum in terms of the structure of the heliospheric current sheet.
Clarifying the contribution of subjective norm to predicting leisure-time exercise.
Okun, Morris A; Karoly, Paul; Lutz, Rafer
2002-01-01
To clarify the contribution of subjective norm to exercise intention and behavior by considering the influence of descriptive as well as injunctive social norms related to family and friends. A sample of 530 college students completed a questionnaire that assessed descriptive and injunctive social norms related to family and to friends, perceived behavioral control, attitude, intention, and leisure-time exercise. Friend descriptive social norm was a significant predictor of both intention (p<.05) and leisure-time exercise (p<.001). Descriptive norms should be incorporated into tests of the theory of planned behavior in the exercise domain.
NASA Astrophysics Data System (ADS)
Shaikh, M. M.; Notarpietro, R.; Yin, P.; Nava, B.
2013-12-01
The Multi-Instrument Data Analysis System (MIDAS) algorithm is based on oceanographic imaging techniques first applied to image 2D slices of the ionosphere. The first version of MIDAS (version 1.0) was able to deal with any line-integral data such as GPS-ground or GPS-LEO differential-phase data or inverted ionograms. The current version extends tomography into four-dimensional (latitude, longitude, height and time) spatial-temporal mapping that combines all observations simultaneously in a single inversion with the minimum of a priori assumptions about the form of the ionospheric electron-concentration distribution. This work investigates Radio Occultation (RO) data assimilation into MIDAS by assessing ionospheric asymmetry and its impact on RO data inversion when the onion-peeling algorithm is used. Ionospheric RO data from the COSMIC mission, specifically data collected during the 24 September 2011 storm over mid-latitudes, have been used for the data assimilation. Using output electron density data from MIDAS (with/without RO assimilation) and ideal RO geometries, we assessed ionospheric asymmetry. It has been observed that the level of asymmetry increased significantly when the storm was active. This was due to the increased ionization, which in turn produced large gradients along the occulted ray path in the ionosphere. The presence of larger gradients was better observed when MIDAS was used with RO-assimilated data. A very good correlation has been found between the evaluated asymmetry and errors related to the inversion products when the inversion is performed using standard techniques based on the assumption of spherical symmetry of the ionosphere. Errors are evaluated considering the peak electron density (NmF2) estimate and the vertical TEC (VTEC) evaluation. This work highlights the importance of having a tool able to assess the effectiveness of Radio Occultation data inversion with standard algorithms, like onion-peeling, which are based on the ionospheric spherical symmetry assumption. The outcome of this work will lead to a better inversion algorithm that deals with ionospheric asymmetry in a more realistic way; this is foreseen as a task for future research. This work has been done under the framework of the TRANSMIT project (ITN Marie Curie Actions - GA No. 264476).
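The onion-peeling algorithm discussed above assumes spherical symmetry: each slant TEC observation is a sum of path segments through concentric shells, giving a triangular system that is solved from the top shell downward. The sketch below implements this on a toy electron-density profile; the shell geometry and profile are illustrative, and the point of the paper is precisely that horizontal gradients break this assumption.

    import numpy as np

    # shell radii (km from Earth centre), from the top of the ionosphere down
    r = np.linspace(6371 + 1000, 6371 + 100, 46)         # 45 shells
    n_true = np.exp(-0.5 * ((r[:-1] - 6671) / 80) ** 2)  # toy density profile

    # path length of a ray with tangent radius rt inside a shell [ri, ro],
    # under the spherical symmetry (onion-peeling) hypothesis
    def seg(ro, ri, rt):
        return 2 * (np.sqrt(max(ro**2 - rt**2, 0)) - np.sqrt(max(ri**2 - rt**2, 0)))

    m = len(n_true)
    L = np.zeros((m, m))
    for i in range(m):                    # ray tangent to the bottom of shell i
        for j in range(i + 1):
            L[i, j] = seg(r[j], r[j + 1], r[i + 1])

    tec = L @ n_true                      # simulated slant TEC observations

    # onion-peeling = solve the triangular system from the top shell down
    n_hat = np.zeros(m)
    for i in range(m):
        n_hat[i] = (tec[i] - L[i, :i] @ n_hat[:i]) / L[i, i]
    print("max inversion error:", np.abs(n_hat - n_true).max())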
White, Katherine M; Smith, Joanne R; Terry, Deborah J; Greenslade, Jaimi H; McKimmie, Blake M
2009-03-01
The present research investigated three approaches to the role of norms in the theory of planned behaviour (TPB). Two studies examined the proposed predictors of intentions to engage in household recycling (Studies 1 and 2) and reported recycling behaviour (Study 1). Study 1 tested the impact of descriptive and injunctive norms (personal and social) and the moderating role of self-monitoring on norm-intention relations. Study 2 examined the role of group norms and group identification and the moderating role of the collective self on norm-intention relations. Both studies demonstrated support for the TPB and the inclusion of additional normative variables: attitudes, perceived behavioural control, descriptive norms, and personal injunctive norms (but not social injunctive norms) emerged as significant independent predictors of intentions. There was no evidence that the impact of norms on intentions varied as a function of the dispositional variables of self-monitoring (Study 1) or the collective self (Study 2). There was support, however, for the social identity approach to attitude-behaviour relations in that group norms predicted recycling intentions, particularly for individuals who identified strongly with the group. The results of these two studies highlight the critical role of social influence processes within the TPB and the attitude-behaviour context.
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiment, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
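The objective described above can be written as min over c, e of ||x - Uc - e||^2 + lam1*||e||_1 + lam2*||c||^2, and alternating minimization solves each subproblem in closed form: a ridge update for the subspace coefficients and soft-thresholding for the sparse residual. The sketch below is a generic implementation of that scheme; the regularization weights, dimensions and occlusion simulation are assumptions, not the paper's settings.

    import numpy as np

    def soft(z, t):
        # soft-thresholding: the proximal operator of the L1 penalty
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def represent(x, U, lam1=0.1, lam2=0.05, iters=50):
        """Alternating minimization of
           ||x - U c - e||^2 + lam1*||e||_1 + lam2*||c||^2."""
        e = np.zeros_like(x)
        for _ in range(iters):
            c = U.T @ (x - e) / (1.0 + lam2)     # ridge step (U orthonormal)
            e = soft(x - U @ c, lam1 / 2.0)      # sparse residual (occlusion)
        return c, e

    rng = np.random.default_rng(6)
    d, k = 256, 8
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # stand-in PCA basis
    x = U @ rng.standard_normal(k)
    x[:20] += 2.0                                     # simulated occlusion
    c, e = represent(x, U)
    print("occluded pixels flagged:", int((np.abs(e) > 0.5).sum()),
          " residual:", round(float(np.linalg.norm(x - U @ c - e)), 3))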
Global bioethics at UNESCO: in defence of the Universal Declaration on Bioethics and Human Rights.
Andorno, R
2007-03-01
The Universal Declaration on Bioethics and Human Rights adopted by the United Nations Educational, Scientific, and Cultural Organisation (UNESCO) on 19 October 2005 is an important step in the search for global minimum standards in biomedical research and clinical practice. As a member of the UNESCO International Bioethics Committee, I participated in the drafting of this document. Drawing on this experience, the principal features of the Declaration are outlined before responding to two general charges that have been levelled at UNESCO's bioethical activities and at this particular document. One criticism is to the effect that UNESCO is exceeding its mandate by drafting such bioethical instruments; in particular, the charge is that it is trespassing on a topic that lies within the responsibility of the World Health Organization. The second criticism is that UNESCO's reliance on international human rights norms is inappropriate.
MNE software for processing MEG and EEG data
Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
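For readers who want to try the package, a condensed example of the canonical minimum-norm workflow is sketched below, using the bundled sample dataset and API names from the public MNE-Python documentation; file names, the evoked condition and regularization values follow the standard tutorial and may differ across versions, so treat this as a sketch rather than a definitive recipe.

    import mne
    from mne.datasets import sample
    from mne.minimum_norm import make_inverse_operator, apply_inverse

    # data_path() downloads/returns the tutorial dataset location
    meg_dir = sample.data_path() / "MEG" / "sample"

    evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif",
                              condition="Left Auditory", baseline=(None, 0))
    noise_cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")
    fwd = mne.read_forward_solution(meg_dir / "sample_audvis-meg-oct-6-fwd.fif")

    # build the inverse operator and compute a minimum-norm source estimate
    inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
    print(stc)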
Deprivation selectively modulates brain potentials to food pictures.
Stockburger, Jessica; Weike, Almut I; Hamm, Alfons O; Schupp, Harald T
2008-08-01
Event-related brain potentials (ERPs) were used to examine whether the processing of food pictures is selectively modulated by changes in the motivational state of the observer. Sixteen healthy male volunteers were tested twice 1 week apart, either after 24 hr of food deprivation or after normal food intake. ERPs were measured while participants viewed appetitive food pictures as well as standard emotional and neutral control pictures. Results show that the ERPs to food pictures in a hungry, rather than satiated, state were associated with enlarged positive potentials over posterior sensor sites in a time window of 170-310 ms poststimulus. Minimum-norm analysis suggests the enhanced processing of food cues primarily in occipito-temporo-parietal regions. In contrast, processing of standard emotional and neutral pictures was not modulated by food deprivation. Considered from the perspective of motivated attention, the selective change of food cue processing may reflect a state-dependent change in stimulus salience.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
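To illustrate the delete-one jackknife mentioned above, here is a minimal generic sketch in Python; it implements the standard jackknife variance formula under our own assumptions and is not the authors' MINQUE/AUP code:

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of the sampling variance of a
    statistic. `estimator` maps a 1-D sample to a scalar estimate."""
    n = len(data)
    # Leave-one-out replicates of the estimate
    reps = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

# Example: variance of the sample mean (compare with s^2 / n)
x = np.random.default_rng(0).normal(size=30)
print(jackknife_variance(x, np.mean))
```

The resulting variance estimate would feed a t-test of the kind the abstract describes for detecting significant variation.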
Perkins, Jessica M.; Perkins, H. W.; Craig, David W.
2014-01-01
Previous research has revealed pervasive misperceptions of peer norms among adolescents for a variety of behaviors such as alcohol use, smoking, and bullying, and has shown that these misperceptions predict personal behavior. Similarly, misperception of peer weight norms may be a pervasive and important risk factor for adolescent weight status. Thus, the comparative association of actual and perceived peer weight norms is examined in relation to personal weight status. Secondary school students in 40 middle and high schools (n=40,328) were surveyed about their perceptions of the peer weight norm for the same gender and grade within their school. Perceived norms were compared to aggregate self-reports of weight for these same groups. Overestimation of peer weight norms by more than 5% occurred among 26% of males and 20% of females (by 22 and 16 pounds on average, respectively). Underestimation occurred among 38% of both males and females (by 16 and 13 pounds on average, respectively). Personal overweight status based on body mass index (BMI) was much more prevalent among respondents who overestimated peer weight norms, as was personal underweight status among respondents who underestimated norms. Perception of the peer norm was the strongest predictor of personal BMI among all personal and school variables examined for both male and female students. Thus, reducing misperceived weight norms should be given more attention as a potential avenue for preventing obesity and eating disorders. PMID:24488532
Mu, Yan; Kitayama, Shinobu; Han, Shihui; Gelfand, Michele J
2015-12-15
Humans are unique among all species in their ability to develop and enforce social norms, but there is wide variation in the strength of social norms across human societies. Despite this fundamental aspect of human nature, there has been surprisingly little research on how social norm violations are detected at the neurobiological level. Building on the emerging field of cultural neuroscience, we combine noninvasive electroencephalography (EEG) with a new social norm violation paradigm to examine the neural mechanisms underlying the detection of norm violations and how they vary across cultures. EEG recordings from Chinese and US participants (n = 50) showed a consistent negative deflection of the event-related potential around 400 ms (N400) over the central and parietal regions that served as a culture-general neural marker of detecting norm violations. The N400 at the frontal and temporal regions, however, was observed only among Chinese and not US participants, illustrating culture-specific neural substrates of the detection of norm violations. Further, the frontal N400 predicted a variety of behavioral and attitudinal measurements related to the strength of social norms that have been found at the national and state levels, including higher cultural superiority and self-control but lower creativity. There were no cultural differences in the N400 induced by semantic violation, suggesting a unique cultural influence on social norm violation detection. In all, these findings provide the first evidence, to our knowledge, for the neurobiological foundations of social norm violation detection and its variation across cultures.
Ecker, Anthony H.; Buckner, Julia D.
2014-01-01
Objective: Individuals with greater social anxiety are particularly vulnerable to cannabis-related impairment. Descriptive norms (beliefs about others’ use) and injunctive norms (beliefs regarding others’ approval of risky use) may be particularly relevant to cannabis-related behaviors among socially anxious persons if they use cannabis for fear of evaluation for deviating from what they believe to be normative behaviors. Yet, little research has examined the impact of these social norms on the relationships between social anxiety and cannabis use behaviors. Method: The current study investigated whether the relationships of social anxiety to cannabis use and use-related problems varied as a function of social norms. The sample comprised 230 (63.0% female) current cannabis-using undergraduates. Results: Injunctive norms (regarding parents, not friends) moderated the relationship between social anxiety and cannabis-related problem severity. Post hoc probing indicated that among participants with higher (but not lower) social anxiety, those with greater norm endorsement reported the most severe impairment. Injunctive norms (parents) also moderated the relationship between social anxiety and cannabis use frequency such that those with higher social anxiety and lower norm endorsement used cannabis less frequently. Descriptive norms did not moderate the relationship between social anxiety and cannabis use frequency. Conclusions: Socially anxious cannabis users appear to be especially influenced by beliefs regarding parents’ approval of risky cannabis use. Results underscore the importance of considering reference groups and the specific types of norms in understanding factors related to cannabis use behaviors among this vulnerable population. PMID:24411799
Babb, James; Xia, Ding; Chang, Gregory; Krasnokutsky, Svetlana; Abramson, Steven B.; Jerschow, Alexej; Regatte, Ravinder R.
2013-01-01
Purpose: To assess the potential use of sodium magnetic resonance (MR) imaging of cartilage, with and without fluid suppression by using an adiabatic pulse, for classifying subjects with versus subjects without osteoarthritis at 7.0 T. Materials and Methods: The study was approved by the institutional review board and was compliant with HIPAA. The knee cartilage of 19 asymptomatic (control subjects) and 28 symptomatic (osteoarthritis patients) subjects underwent 7.0-T sodium MR imaging with use of two different sequences: one without fluid suppression (radial three-dimensional sequence) and one with fluid suppression (inversion recovery [IR] wideband uniform rate and smooth truncation [WURST]). Fluid suppression was obtained by using IR with an adiabatic inversion pulse (WURST pulse). Mean sodium concentrations and their standard deviations were measured in the patellar, medial femorotibial, and lateral femorotibial cartilage regions over four consecutive sections for each subject. The minimum, maximum, median, and average means and standard deviations were calculated over all measurements for each subject. The utility of these measures in the detection of osteoarthritis was evaluated by using logistic regression and the area under the receiver operating characteristic curve (AUC). Bonferroni correction was applied to the P values obtained with logistic regression. Results: Measurements from IR WURST were found to be significant predictors of all osteoarthritis (Kellgren-Lawrence score of 1–4) and early osteoarthritis (Kellgren-Lawrence score of 1 or 2). The minimum standard deviation provided the highest AUC (0.83) with the highest accuracy (>78%), sensitivity (>82%), and specificity (>74%) for both all osteoarthritis and early osteoarthritis groups. Conclusion: Quantitative sodium MR imaging at 7.0 T with fluid suppression by using adiabatic IR is a potential biomarker for osteoarthritis. © RSNA, 2013 PMID:23468572
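To make the evaluation step concrete, a minimal sketch of logistic-regression classification scored by AUC follows; the feature values and group means are synthetic stand-ins invented for illustration, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical data: one feature per subject (e.g., minimum standard
# deviation of sodium concentration); y = 1 for osteoarthritis.
rng = np.random.default_rng(0)
X = np.r_[rng.normal(30, 5, (19, 1)), rng.normal(38, 5, (28, 1))]
y = np.r_[np.zeros(19), np.ones(28)]

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```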
NASA Astrophysics Data System (ADS)
Sabino, Fernando P.; Oliveira, Luiz N.; Wei, Su-Huai; Da Silva, Juarez L. F.
2018-02-01
Transparent conducting oxides such as the bixbyite In2O3 and rutile SnO2 systems have large disparities between the optical and fundamental band gaps, ΔEg(O-F), because selection rules forbid dipolar transitions from the top of the valence band to the conduction-band minimum; however, the optical gaps of multi-cation compounds with the same chemical species often coincide with their fundamental gaps. To explain this conundrum, we have employed density-functional theory to compute the optical properties of the multi-cation compounds In2ZnO4 and In4Sn3O12 in several crystal structures. We show that a recently proposed mechanism explaining the disparity between the optical and fundamental gaps of M2O3 (M = Al, Ga, and In) applies also to other binary systems and to multi-cation compounds. Namely, a gap disparity will arise if the following three conditions are satisfied: (i) the crystal structure has inversion symmetry; (ii) the conduction-band minimum is formed by the cation and O s-orbitals; and (iii) there is strong p-d coupling and weak p-p coupling in the vicinity of the valence-band maximum. The third property depends critically on the cationic chemical species. In the structures with inversion symmetry, Zn (Sn) strengthens (weakens) the p-d coupling in In2ZnO4 (In4Sn3O12), enhancing (reducing) the gap disparity. Furthermore, we have identified an In4Sn3O12 structure that is 31.80 meV per formula unit more stable than a recently proposed alternative model.
Jimmy's baby doll and Jenny's truck: young children's reasoning about gender norms.
Conry-Murray, Clare; Turiel, Elliot
2012-01-01
To assess the flexibility of reasoning about gender, children ages 4, 6, and 8 years (N = 72) were interviewed about gender norms when different domains were highlighted. The majority of participants at all ages judged a reversal of gender norms in a different cultural context to be acceptable. They also judged gender norms as a matter of personal choice and they negatively evaluated a rule enforcing gender norms in schools. Older children were more likely to show flexibility than younger children. Justifications obtained from 6- and 8-year-olds showed that they considered adherence to gender norms a matter of personal choice and they viewed the rule enforcing gender norms as unfair. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
An algorithm for solving the system-level problem in multilevel optimization
NASA Technical Reports Server (NTRS)
Balling, R. J.; Sobieszczanski-Sobieski, J.
1994-01-01
A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with an increasing number of members (which simulate disciplines), and the results are summarized.
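For reference, the KS norm mentioned above is the standard Kreisselmeier-Steinhauser aggregation of constraint functions g_i(x); a minimal sketch of its numerically stable (shifted) form is

```latex
% Kreisselmeier--Steinhauser (KS) aggregation of constraints g_i;
% \rho controls the sharpness of the approximation to max_i g_i.
\[
  \mathrm{KS}(g) \;=\; g_{\max}
    \;+\; \frac{1}{\rho}\,
    \ln\!\sum_{i=1}^{m} \exp\bigl(\rho\,(g_i - g_{\max})\bigr),
  \qquad g_{\max} = \max_i g_i .
\]
```

Since max_i g_i <= KS(g) <= max_i g_i + ln(m)/rho, the KS norm is a smooth, conservative surrogate for the non-smooth max norm, which is why the two are natural alternatives in the constraint treatment described.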
How do social norms influence prosocial development?
House, Bailey R
2018-04-01
Humans are both highly prosocial and extremely sensitive to social norms, and some theories suggest that norms are necessary to account for uniquely human forms of prosocial behavior and cooperation. Understanding how norms influence prosocial behavior is thus essential if we are to describe the psychology and development of prosocial behavior. In this article I review recent research from across the social sciences that provides (1) a theoretical model of how norms influence prosocial behavior, (2) empirical support for the model based on studies with adults and children, and (3) predictions about the psychological mechanisms through which norms shape prosocial behavior. I conclude by discussing the need for future studies into how prosocial behavior develops through emerging interactions between culturally varying norms, social cognition, emotions, and potentially genes. Copyright © 2017 Elsevier Ltd. All rights reserved.