Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
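The constraint structure described above can be mimicked with a much simpler alternating-projections (POCS) sketch: project onto the set of signals whose known DWT coefficients match the data, then onto the box defined by the amplitude bounds. This is only a hedged illustration, not Cole's representation theory; the orthonormal Haar matrix, the bound values, and the choice of observed coefficients are all assumptions made for the example.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        k = H.shape[0]
        top = np.kron(H, [1.0, 1.0])          # scaling (average) rows
        bot = np.kron(np.eye(k), [1.0, -1.0])  # detail (difference) rows
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

rng = np.random.default_rng(0)
n = 16
H = haar_matrix(n)                 # rows form an orthonormal DWT basis
x_true = np.clip(rng.normal(0.0, 0.5, n), -1.0, 1.0)
coeffs = H @ x_true
known = np.zeros(n, dtype=bool)
known[:10] = True                  # only a subset of DWT coefficients observed

lo, hi = -1.0, 1.0                 # a priori amplitude bounds on the signal
x = np.zeros(n)                    # start from the zero (minimum-norm) signal
for _ in range(300):
    c = H @ x
    c[known] = coeffs[known]       # project onto the data-consistency set
    x = H.T @ c
    x = np.clip(x, lo, hi)         # project onto the amplitude box
```

After convergence, `x` matches the observed coefficients while respecting the bounds; starting from zero keeps the estimate small in norm, in the spirit of the abstract.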
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust high-resolution spectrum estimator. Based on the theory of SAR imaging, we show that the signal model of SAR imagery is amenable to data extrapolation methods for improving image resolution. The method extrapolates the effective bandwidth in the phase-history domain; on both simulated and actual measured data it yields better results than the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method.
Sparse EEG/MEG source estimation via a group lasso
Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor
2017-01-01
Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
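The computational core of a group-lasso prior is the block soft-thresholding proximal operator, which shrinks each group of coefficients together and zeroes out whole groups (e.g., regions of interest) at once. A minimal sketch with hypothetical toy groups, not the authors' EEG/MEG pipeline:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (the group-lasso penalty):
    each group's block is shrunk toward zero by t in l2 norm, and is
    zeroed out entirely when its norm falls below t."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

# toy example: three groups, the weak middle group is removed entirely
x = np.array([3.0, 4.0, 0.1, -0.1, 1.0])
groups = [[0, 1], [2, 3], [4]]
z = group_soft_threshold(x, groups, 1.0)
```

Inside a proximal-gradient loop this operator is what produces region-level sparsity, in contrast to the entrywise soft threshold of the plain lasso.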
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
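The L1-norm reformulation mentioned above can be sketched directly: with both polyhedra given as vertex lists, the minimum L1 distance between their convex hulls is a linear program over convex-combination weights and auxiliary slack variables. The vertex data below are illustrative, and `scipy.optimize.linprog` stands in for whatever LP solver the paper used.

```python
import numpy as np
from scipy.optimize import linprog

def l1_distance(VR, VO):
    """Minimum L1 distance between conv(VR) and conv(VO).
    VR: (m, d) vertex array of the robot, VO: (k, d) of the obstacle.
    Variables z = [lam (m), mu (k), s (d)]; minimize sum(s) s.t.
    |VR^T lam - VO^T mu| <= s componentwise, lam and mu convex weights."""
    m, d = VR.shape
    k, _ = VO.shape
    nv = m + k + d
    c = np.concatenate([np.zeros(m + k), np.ones(d)])
    A_ub = np.zeros((2 * d, nv))
    for i in range(d):
        A_ub[i, :m] = VR[:, i]            #  (p - q)_i - s_i <= 0
        A_ub[i, m:m + k] = -VO[:, i]
        A_ub[i, m + k + i] = -1.0
        A_ub[d + i] = -A_ub[i]            # -(p - q)_i - s_i <= 0
        A_ub[d + i, m + k + i] = -1.0
    b_ub = np.zeros(2 * d)
    A_eq = np.zeros((2, nv))              # lam, mu sum to one
    A_eq[0, :m] = 1.0
    A_eq[1, m:m + k] = 1.0
    b_eq = np.ones(2)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * nv)
    return res.fun

# unit square robot vs. a unit square obstacle shifted 3 units in x
robot = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
obstacle = np.array([[3.0, 0.0], [4.0, 0.0], [4.0, 1.0], [3.0, 1.0]])
dist = l1_distance(robot, obstacle)
```

A positive optimum means the hulls are separated; a zero optimum signals contact or overlap, i.e. a (predicted) collision.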
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
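The classical ℓ2 MNE baseline that MxNE generalizes has a closed form: for lead field L, data y, and Tikhonov parameter λ, the minimizer of ‖y − Lx‖² + λ‖x‖² is x = Lᵀ(LLᵀ + λI)⁻¹y. A hedged numpy sketch with a random stand-in lead field (the dimensions, λ, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))   # stand-in lead-field (gain) matrix
x_true = np.zeros(n_sources)
x_true[[20, 120]] = [1.0, -1.0]               # two active sources
y = L @ x_true + 0.01 * rng.normal(size=n_sensors)

lam = 1.0                                      # Tikhonov regularization parameter
# MNE closed form: x = L^T (L L^T + lam I)^{-1} y
x_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

Note the solve is over the small sensor-space system (32×32), not the large source space; that is what makes this prior as cheap as the abstract says. The MxNE ℓ1/ℓ2 variants replace the closed form with an iterative proximal scheme.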
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method for obtaining the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined by the cost values previously calculated by a simplified MUSIC scan, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper puts forward a new method to solve the EEG inverse problem. It builds on the following physiological characteristics of neural electrical activity: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the premise for developing the EEG inverse solution, without assuming any other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using the information in the initial solution, and repeat this process until the solutions from the last two iterations remain unchanged.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
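The FOCUSS half of such hybrids is an iteratively reweighted minimum-norm loop: each iteration re-solves a weighted MNE with weights taken from the previous estimate, which progressively concentrates energy on a few locations. A minimal sketch with a random stand-in lead field, not the paper's 3-shell spherical head model:

```python
import numpy as np

def focuss(L, y, lam=1e-6, n_iter=20):
    """FOCUSS: iteratively reweighted minimum-norm estimation.
    Each pass solves x = W^2 L^T (L W^2 L^T + lam I)^{-1} y with
    W = diag(|x_prev|), which drives the estimate toward sparsity."""
    x = np.ones(L.shape[1])                  # uninformative starting weights
    for _ in range(n_iter):
        W = np.abs(x)
        Lw = L * W                           # L @ diag(W), column scaling
        x = W * (Lw.T @ np.linalg.solve(Lw @ Lw.T + lam * np.eye(L.shape[0]), y))
    return x

rng = np.random.default_rng(1)
L = rng.normal(size=(16, 60))                # 16 sensors, 60 candidate sources
x_true = np.zeros(60)
x_true[[5, 40]] = [2.0, -1.5]                # a 2-sparse ground truth
y = L @ x_true
x_hat = focuss(L, y)
```

The "shrinking" variant of the abstract additionally drops near-zero sources from the solution space between iterations, which is what reduces the computational load.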
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J
2004-03-01
Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), deflections that were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on the average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can thus be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
ERIC Educational Resources Information Center
Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec
2011-01-01
We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
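A generic way to pursue L1-norm sparsity in such objective functions is the iterative shrinkage-thresholding algorithm (ISTA). The sketch below solves the plain L1-penalized problem min 0.5‖y − Ax‖² + λ‖x‖₁ with a random stand-in operator; SCCD itself penalizes the variation map of the cortical source distribution, which this example does not attempt.

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1: a gradient step on
    the data term followed by entrywise soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))   # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 80))                # 20 measurements, 80 unknowns
x_true = np.zeros(80)
x_true[[10, 50, 70]] = [1.5, -2.0, 1.0]      # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

With far fewer measurements than unknowns, the L1 penalty still recovers the sparse support, which is the property these imaging algorithms exploit.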
Determining genetic erosion in fourteen Picea chihuahuana Martínez populations.
C.Z. Quiñones-Pérez; C. Wehenkel
2017-01-01
Picea chihuahuana is an endemic species in Mexico and is considered endangered, according to the Mexican Official Norm (NOM-ECOL-059-2010). This species covers a total area of no more than 300 ha located in at least 40 sites along the Sierra Madre Occidental in Durango and Chihuahua states. A minimum of 42,600 individuals has been estimated,...
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.
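The accept/reject structure of the HMCA can be sketched with the standard leapfrog integrator (the paper's 2MNI is a minimum-norm variant of this integration step); the standard-normal target below is purely illustrative, not the RSV model.

```python
import numpy as np

def hmc(logp, grad, x0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Hybrid Monte Carlo with a leapfrog integrator for a 1-D target."""
    rng = np.random.default_rng(seed)
    x = x0
    out = []
    for _ in range(n_samples):
        p = rng.normal()                          # resample momentum
        xn, pn = x, p + 0.5 * eps * grad(x)       # initial half step
        for i in range(n_leap):
            xn = xn + eps * pn                    # full position step
            # full momentum step, half-sized on the final iteration
            pn = pn + (eps if i < n_leap - 1 else 0.5 * eps) * grad(xn)
        # Metropolis test on the Hamiltonian H = p^2/2 - logp(x)
        dh = (0.5 * p ** 2 - logp(x)) - (0.5 * pn ** 2 - logp(xn))
        if np.log(rng.random()) < dh:
            x = xn
        out.append(x)
    return np.array(out)

# sample a standard normal: logp(x) = -x^2/2, grad logp(x) = -x
samples = hmc(lambda x: -0.5 * x ** 2, lambda x: -x, 0.0)
```

The integrator choice only changes the discretization error (and hence the acceptance rate); the accept/reject step keeps the sampler exact either way, which is why swapping leapfrog for 2MNI is a pure efficiency question.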
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
regularization seeks the minimum-norm, least squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory... of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
The roles of outlet density and norms in alcohol use disorder.
Ahern, Jennifer; Balzer, Laura; Galea, Sandro
2015-06-01
Alcohol outlet density and norms shape alcohol consumption. However, due to analytic challenges we do not know: (a) if alcohol outlet density and norms also shape alcohol use disorder, and (b) whether they act in combination to shape disorder. We applied a new targeted minimum loss-based estimator for rare outcomes (rTMLE) to a general population sample from New York City (N = 4000) to examine the separate and combined relations of neighborhood alcohol outlet density and norms around drunkenness with alcohol use disorder. Alcohol use disorder was assessed using the World Mental Health Comprehensive International Diagnostic Interview (WMH-CIDI) alcohol module. Confounders included demographic and socioeconomic characteristics, as well as history of drinking prior to residence in the current neighborhood. Alcohol use disorder prevalence was 1.78%. We found a marginal risk difference for alcohol outlet density of 0.88% (95% CI 0.00-1.77%), and for norms of 2.05% (95% CI 0.89-3.21%), adjusted for confounders. While each exposure had a substantial relation with alcohol use disorder, there was no evidence of additive interaction between the exposures. Results indicate that the neighborhood environment shapes alcohol use disorder. Despite the lack of additive interaction, each exposure had a substantial relation with alcohol use disorder and our findings suggest that alteration of outlet density and norms together would likely be more effective than either one alone. Important next steps include development and testing of multi-component intervention approaches aiming to modify alcohol outlet density and norms toward reducing alcohol use disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
ERIC Educational Resources Information Center
Garcia-Quintana, Roan A.; Mappus, M. Lynne
1980-01-01
Norm referenced data were utilized for determining the mastery cutoff score on a criterion referenced test. Once a cutoff score on the norm referenced measure is selected, the cutoff score on the criterion referenced measure becomes that score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
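The Hankel-matrix machinery behind nuclear-norm SI can be sketched in a few lines: stack the measured signal into a Hankel matrix and use the nuclear norm (the sum of singular values) as the convex surrogate for its rank, which reveals the minimal state dimension. The second-order impulse response below is an illustrative assumption, not one of the paper's benchmarks.

```python
import numpy as np

def hankel_from_signal(w, rows):
    """Stack a scalar signal into a Hankel matrix with the given row count."""
    cols = len(w) - rows + 1
    return np.array([w[i:i + cols] for i in range(rows)])

# impulse response of a 2nd-order system (damped oscillation):
# its Hankel matrix has rank 2, matching the state dimension
t = np.arange(40)
h = 0.8 ** t * np.cos(0.5 * t)
H = hankel_from_signal(h, 10)

s = np.linalg.svd(H, compute_uv=False)
nuclear_norm = s.sum()                    # convex surrogate for rank(H)
order = int((s > 1e-8 * s[0]).sum())      # numerical rank = system order
```

Minimizing the nuclear norm (by ADMM in prior work, by Tabu Search here) pushes the trailing singular values to zero, enforcing a low-order realization while the data-fit term keeps the estimated output close to the measurements.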
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to achieve collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem, in which uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles, is then formulated. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
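For the special case of axis-aligned boxes the L1 and L-infinity minimum distances separate by axis and have closed forms; the sketch below shows this special case only (for general convex polyhedra the problem becomes the linear program the abstract describes):

```python
def interval_gap(a_lo, a_hi, b_lo, b_hi):
    """Separation of two 1-D intervals (zero if they overlap)."""
    return max(0.0, a_lo - b_hi, b_lo - a_hi)

def box_distance(a_min, a_max, b_min, b_max, norm="inf"):
    """Minimum distance between two axis-aligned boxes under the L1 or
    L-infinity norm. Per-axis gaps combine by max (L-infinity) or sum (L1)
    because the minimization decouples across coordinates."""
    gaps = [interval_gap(al, ah, bl, bh)
            for al, ah, bl, bh in zip(a_min, a_max, b_min, b_max)]
    return max(gaps) if norm == "inf" else sum(gaps)

# Unit box at the origin vs a box offset by 2 in x and 1 in y:
d_inf = box_distance([0, 0], [1, 1], [3, 2], [4, 5], norm="inf")
d_one = box_distance([0, 0], [1, 1], [3, 2], [4, 5], norm="1")
```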
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria, and are applicable to multidimensional phenomena which are characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function leading to stationary quantities, and also it generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study computed in detail. ?? 1987 Plenum Publishing Corporation.
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy; therefore, the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, where the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuators in sleep mode.
MNE software for processing MEG and EEG data
Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.
2013-01-01
Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
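The cortically-constrained minimum-norm estimate at the heart of MNE can be sketched, for a toy lead field, as a (optionally Tikhonov-regularised) pseudoinverse: among all source patterns that explain the sensor data, pick the one of smallest L2 norm. The numbers below are hypothetical and this is not the MNE package API.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.0):
    """Minimum-norm source estimate for sensor data y = L @ x: among all
    source vectors x fitting the sensors, return the one with the
    smallest L2 norm (lam > 0 adds Tikhonov regularisation for noise)."""
    G = L @ L.T + lam * np.eye(L.shape[0])   # sensor-space Gram matrix
    return L.T @ np.linalg.solve(G, y)

# Toy lead field: 2 sensors, 4 candidate cortical sources (hypothetical).
L = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 0.3, 1.0, 0.4]])
y = np.array([1.0, 2.0])
x_hat = minimum_norm_estimate(L, y)
```

With lam = 0 the estimate reproduces the sensor data exactly; in practice lam is set from the noise covariance.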
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
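The kind of multiplicative, positivity-preserving update used to minimise Csiszár's I-divergence can be sketched in Richardson-Lucy form. The 2x2 kernel below is a toy stand-in, not the Planck's-law kernel, and this is not the authors' exact (regularised) algorithm.

```python
import numpy as np

def min_idivergence(K, y, iters=2000):
    """Multiplicative update decreasing Csiszar's I-divergence between the
    measured non-negative spectrum y and the model K @ a. Starting from a
    positive initialisation, every iterate stays non-negative."""
    a = np.ones(K.shape[1])
    col_sums = K.sum(axis=0)
    for _ in range(iters):
        a *= (K.T @ (y / (K @ a))) / col_sums
    return a

K = np.array([[0.8, 0.2],
              [0.2, 0.8]])
a_true = np.array([1.0, 3.0])
a_hat = min_idivergence(K, K @ a_true)   # clean, well-posed toy case
```

For noisy or ill-posed kernels this unregularised iteration degrades, which is what motivates the entropy, L1 and Good's-roughness penalties in the abstract.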
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. 
Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
4.1. The case of a non-degenerate minimum point ([137], I)
4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
6.1. General situations
6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
7.1. Homogeneous fields and fields with constant dispersion
7.2. Finitely many maximum points of dispersion
7.3. Manifold of maximum points of dispersion
7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
Bibliography
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
Cheng, Su-Fen; Lee-Hsieh, Jane; Turton, Michael A; Lin, Kuan-Chia
2014-06-01
Little research has investigated the establishment of norms for nursing students' self-directed learning (SDL) ability, recognized as an important capability for professional nurses. An item response theory (IRT) approach was used to establish norms for SDL abilities valid for the different nursing programs in Taiwan. The purposes of this study were (a) to use IRT with a graded response model to reexamine the SDL instrument, or the SDLI, originally developed by this research team using confirmatory factor analysis and (b) to establish SDL ability norms for the four different nursing education programs in Taiwan. Stratified random sampling with probability proportional to size was used. A minimum of 15% of students from the four different nursing education degree programs across Taiwan was selected. A total of 7,879 nursing students from 13 schools were recruited. The research instrument was the 20-item SDLI developed by Cheng, Kuo, Lin, and Lee-Hsieh (2010). IRT with the graded response model was used with a two-parameter logistic model (discrimination and difficulty) for the data analysis, calculated using MULTILOG. Norms were established using percentile rank. Analysis of item information and test information functions revealed that 18 items exhibited very high discrimination and two items had high discrimination. The test information function was higher in this range of scores, indicating greater precision in the estimate of nursing student SDL. Reliability fell between .80 and .94 for each domain and the SDLI as a whole. The total information function shows that the SDLI is appropriate for all nursing students, except for the top 2.5%. SDL ability norms were established for each nursing education program and for the nation as a whole. IRT is shown to be a potent and useful methodology for scale evaluation. The norms for SDL established in this research will provide practical standards for nursing educators and students in Taiwan.
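The two-parameter logistic (2PL) IRT model underlying the analysis above can be sketched in a few lines (illustrative parameter values, not the SDLI item estimates):

```python
import math

def two_pl(theta, a, b):
    """Two-parameter logistic IRT item response function: probability of a
    positive response at ability theta, for an item with discrimination a
    and difficulty b. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

p_mid = two_pl(0.0, a=2.0, b=0.0)        # ability equals difficulty -> 0.5
p_high_disc = two_pl(1.0, a=2.0, b=0.0)  # steeper curve, higher probability
p_low_disc = two_pl(1.0, a=0.5, b=0.0)
```

The graded response model used in the study extends this to ordered response categories by differencing cumulative 2PL curves.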
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
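The weakest (drastic) t-norm mentioned above has a simple closed form, sketched here for illustration:

```python
def drastic_t_norm(a, b):
    """Weakest (drastic) t-norm T_D on [0, 1]: returns min(a, b) when one
    argument is exactly 1, and 0 otherwise. It lower-bounds every t-norm."""
    if max(a, b) == 1.0:
        return min(a, b)
    return 0.0
```

Aggregating fuzzy information with T_D keeps the resulting possibility distributions as tight as any t-norm allows, which is what makes the equivalent stochastic problem tractable.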
Error analysis of finite element method for Poisson–Nernst–Planck equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain the optimal error estimates in L∞(H1) and L2(H1) norms, and suboptimal error estimates in L∞(L2) norm, with linear element, and optimal error estimates in L∞(L2) norm with quadratic or higher-order element, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
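A standard way to check such a priori rates numerically is to compute the observed convergence order from errors on successively refined meshes. The error values below are hypothetical, chosen to mimic the O(h²) L2-norm behaviour of a linear element:

```python
import math

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Observed convergence order from errors at mesh sizes h and h/ratio,
    assuming error ~ C * h^p so that p = log(e_coarse/e_fine) / log(ratio)."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

# Hypothetical L2-norm errors while halving h twice:
errors = [4.0e-2, 1.0e-2, 2.5e-3]
rates = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
```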
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well, if not better, than alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
Automated ambiguity estimation for VLBI Intensive sessions using L1-norm
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
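Why the L1-norm copes better with gross, ambiguity-like errors than the L2-norm can be sketched with iteratively reweighted least squares (IRLS). This is a toy illustration of the estimation principle, not the c5++ implementation:

```python
import numpy as np

def l1_fit(A, y, iters=100, eps=1e-8):
    """IRLS approximation of the L1-norm solution of A x ~ y: each pass
    solves a weighted least-squares problem with weights 1/|residual|,
    so large residuals (outliers) are progressively down-weighted."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # L2 start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
    return x

# Estimating a constant from 5 observations, one with a gross error:
A = np.ones((5, 1))
y = np.array([1.0, 1.0, 1.0, 1.0, 100.0])
x_l1 = l1_fit(A, y)                               # ~ median = 1
x_l2 = np.linalg.lstsq(A, y, rcond=None)[0]       # mean = 20.8
```

The L2 estimate is dragged far from the consensus by the single bad observation, while the L1 estimate essentially ignores it, mirroring the ambiguity-resolution result reported above.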
Neighbors, Clayton; Lindgren, Kristen P.; Knee, C. Raymond; Fossos, Nicole; DiBello, Angelo
2011-01-01
Social norms theories hold that perceptions of the degree of approval for a behavior have a strong influence on one’s private attitudes and public behavior. In particular, being more approving of drinking and perceiving peers as more approving of drinking are strongly associated with one’s own drinking. However, previous research has not considered that students may vary considerably in the confidence in their estimates of peer approval and in the confidence in their estimates of their own approval of drinking. The present research was designed to evaluate confidence as a moderator of associations among perceived injunctive norms, own attitudes, and drinking. We expected perceived injunctive norms and own attitudes would be more strongly associated with drinking among students who felt more confident in their estimates of peer approval and own attitudes. We were also interested in whether this might differ by gender. Injunctive norms and self-reported alcohol consumption were measured in a sample of 708 college students. Findings from negative binomial regression analyses supported moderation hypotheses for confidence and perceived injunctive norms but not for personal attitudes. Thus, perceived injunctive norms were more strongly associated with own drinking among students who felt more confident in their estimates of friends’ approval of drinking. A three-way interaction further revealed that this was primarily true among women. Implications for norms and peer influence theories as well as interventions are discussed. PMID:21928864
ESTIMATION OF EXPOSURE DOSES FOR THE SAFE MANAGEMENT OF NORM WASTE DISPOSAL.
Jeong, Jongtae; Ko, Nak Yul; Cho, Dong-Keun; Baik, Min Hoon; Yoon, Ki-Hoon
2018-03-16
Naturally occurring radioactive materials (NORM) wastes with different radiological characteristics are generated in several industries. Appropriate options for NORM waste management, including disposal, should be discussed and established based on the relevant acts and regulatory guidelines. Several studies have calculated the exposure dose and the mass of NORM waste that can be disposed of in a landfill site by considering the activity concentration level and exposure dose. In 2012, the Korean government promulgated an act on the safety control of NORM around living environments to protect human health and the environment. For the successful implementation of this act, we suggest a reference design for a landfill for the disposal of NORM waste. Based on this reference landfill, we estimate the maximum exposure doses and the relative contribution of each pathway to the exposure dose for three scenarios: a reference scenario, an ingestion pathway exclusion scenario, and a low leach rate scenario. We also estimate the quantity of NORM waste that can be disposed of in a landfill as a function of the activity concentration levels of the U series, the Th series, and 40K, for two exposure dose levels, 1 and 0.3 mSv/y. The results of this study can be used to support the establishment of the technical bases of a management strategy for the safe disposal of NORM waste.
Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M
2016-08-01
One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validating them using a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is best suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) has been used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)
2001-01-01
A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as natural frequencies of modes, is used to update analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that can update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the ∞-norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds to evaluate which norm is least conservative.
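The norm comparison can be sketched numerically: for a vector of modal-frequency deviations, the ∞-norm bounds only the single worst deviation, while the 2-norm (and, more so, the 1-norm) accumulates all of them, so the ∞-norm gives the smallest uncertainty radius. The deviation values are hypothetical, not the Aerostructures Test Wing data.

```python
import numpy as np

# Hypothetical deviations (Hz) of one modal frequency across repeated
# ground vibration tests.
dev = np.array([0.10, -0.30, 0.20, -0.10])

u_inf = np.linalg.norm(dev, ord=np.inf)   # worst single deviation: 0.30
u_two = np.linalg.norm(dev, ord=2)        # root-sum-square: ~0.387
u_one = np.linalg.norm(dev, ord=1)        # sum of magnitudes: 0.70
```

Since ||x||_inf <= ||x||_2 <= ||x||_1 always holds, an ∞-norm uncertainty description is never larger, which is the sense in which it is least conservative.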
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal but does not account for interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
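The role of the minimum-norm condition in such a network can be sketched with a toy example: interferograms observe only differences of per-image orbit errors, so the network design matrix is rank-deficient, and the minimum-norm solution fixes the free datum (here: zero-mean errors). The numbers are illustrative, not the ENVISAT dataset.

```python
import numpy as np

# Three images, three interferograms observing pairwise differences of
# the per-image orbit errors e1, e2, e3.
A = np.array([[1, -1,  0],
              [0,  1, -1],
              [1,  0, -1]], dtype=float)
b = np.array([2.0, 1.0, 3.0])     # consistent "observed" differences

# np.linalg.lstsq returns the minimum-norm least-squares solution, i.e.
# the quasi-absolute errors with the common-mode offset removed.
e, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Any constant added to all three errors fits the data equally well; the minimum-norm condition selects the representative whose components sum to zero.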
Validity of Other-Gender-Normed Scales on the Kuder Occupational Interest Survey.
ERIC Educational Resources Information Center
Zytowski, Donald G.; Laing, Joan
1978-01-01
Investigated the relationship between KOIS twin scales normed separately on males and females for occupations and college majors. Rankings on own- and other-gender-normed scales correlated highly. The scales were approximately equal in predictive validity. Rankings on other-gender-normed scales provided an accurate estimate of expected rankings on…
Anchoring and Estimation of Alcohol Consumption: Implications for Social Norm Interventions
ERIC Educational Resources Information Center
Lombardi, Megan M.; Choplin, Jessica M.
2010-01-01
Three experiments investigated the impact of anchors on students' estimates of personal alcohol consumption to better understand the role that this form of bias might have in social norm intervention programs. Experiments I and II found that estimates of consumption were susceptible to anchoring effects when an open-answer and a scale-response…
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
Potential estimates for the p-Laplace system with data in divergence form
NASA Astrophysics Data System (ADS)
Cianchi, A.; Schwarzacher, S.
2018-07-01
A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.
Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane
2016-05-01
Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4cm2. We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising method to evaluate which MEG sources can be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.
The paper demonstrates the insufficiency of some requirements of the native Norms, when comparing them with the foreign requirements, for the consideration of calculating situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents (1) comparison of native and foreign normative requirements (PNAE G-7-002-86, Code ASME, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) comparison of RF and USA Norms of pressure vessel material acceptance and also data of pressure vessel hydrotests; (3) comparison of Norms on the presence of defects (RF and USA) in NPP vessels, developments of defect schematization rules, and foundation of a calculated defect (semi-axis correlation a/b) for pressure vessel and piping components; (4) sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) analysis of crack initiation and propagation conditions according to the acting Norms (including crack jumps); (6) necessity to correct estimation methods of ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculating situations (a) LBB and (b) short cracks; (7) necessity to correct estimation methods of ultimate states with the consideration of static and cyclic loading (warm prestressing effect) of pressure vessels, and estimation of the effect stability; (8) proposals on PNAE G-7-002-86 Norm corrections.
Hollis, Geoff; Westbury, Chris
2018-02-01
Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective norms for English words (ANEW): Instruction manual and affective ratings (pp. 1-45). Technical report C-1, the center for research in psychophysiology, University of Florida, 1999).
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) and kernel Gram matrix rank bounds. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.
Liu, Jing; Zhou, Weidong; Juwono, Filbert H
2017-05-08
Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
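The smoothed l0-norm idea the abstract builds on can be sketched in isolation. The toy below is an illustrative reconstruction, not the paper's MMV MIMO-radar formulation (no cumulant matrices, no joint smoothing over measurement vectors): a hypothetical 2 × 4 system in which a Gaussian-smoothed surrogate of the l0-norm is minimized by shrinkage steps with re-projection onto the data-consistency set, while the smoothing width sigma is gradually decreased. The helper names, the sigma schedule, and the step size mu are my own choices.

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matTvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def solve2x2(M, r):
    # Solve the 2x2 system M @ lam = r by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (-M[1][0] * r[0] + M[0][0] * r[1]) / det]

def project(A, x, b):
    # Orthogonal projection of x onto {z : A z = b} for a full-row-rank 2 x n A:
    #   x <- x - A^T (A A^T)^{-1} (A x - b)
    res = [ai - bi for ai, bi in zip(matvec(A, x), b)]
    AAt = [[sum(A[i][k] * A[j][k] for k in range(len(A[0]))) for j in range(2)]
           for i in range(2)]
    lam = solve2x2(AAt, res)
    corr = matTvec(A, lam)
    return [xi - ci for xi, ci in zip(x, corr)]

def smoothed_l0(A, b, sigmas=(1.0, 0.5, 0.2, 0.05, 0.01), inner=30, mu=1.0):
    # Start from the minimum-l2-norm solution; for each sigma, repeatedly apply
    # the Gaussian shrink x_i <- x_i - mu * x_i * exp(-x_i^2 / (2 sigma^2)),
    # which leaves large entries alone and annihilates small ones, then
    # project back onto A x = b.  As sigma shrinks, the smoothed objective
    # approaches n - ||x||_0, concentrating energy on few coordinates.
    x = project(A, [0.0] * len(A[0]), b)
    for sigma in sigmas:
        for _ in range(inner):
            delta = [xi * math.exp(-xi * xi / (2 * sigma * sigma)) for xi in x]
            x = [xi - mu * di for xi, di in zip(x, delta)]
            x = project(A, x, b)
    return x
```

On the toy system A = [[1,0,1,0],[0,1,1,0]], b = [2,2], the minimum-l2 start (2/3, 2/3, 4/3, 0) is driven toward the sparse solution (0, 0, 2, 0).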
Utility of Inferential Norming with Smaller Sample Sizes
ERIC Educational Resources Information Center
Zhu, Jianjun; Chen, Hsin-Yi
2011-01-01
We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute…
Matera, Camilla; Nerini, Amanda; Baroni, Duccio; Stefanile, Cristina
2018-07-01
Through a 2 × 2 × 2 quasi-experimental design (N = 254), this research investigated whether a social campaign eliciting positive emotions and activating moral norms might enhance condom negotiation skills and intended and estimated condom use among young women with or without past sexual experience with casual partners. Emotions had a main effect on one of the six condom negotiation strategies we considered; for most of the other variables an interaction effect with moral norms and/or past behaviour emerged. Concerning estimated condom use, positive emotions worked better than negative ones when moral norms were salient. With respect to negotiation skills, positive rather than negative emotions seemed more effective for women with past casual sexual experience. In women without this kind of experience, positive emotions seemed to work better when moral norms were salient. Moral norms had a main effect on negotiation self-efficacy, but not in the predicted direction: when moral norms were more salient, women were found to be less confident about their negotiation ability. These results suggest that a message which makes moral norms salient should at the same time elicit positive emotions in order to be effective; moreover, messages should be carefully tailored according to women's past behaviour.
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and in cases with human MEG responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
Descriptive drinking norms in Native American and non-Hispanic White college students.
Hagler, Kylee J; Pearson, Matthew R; Venner, Kamilla L; Greenfield, Brenna L
2017-09-01
College students tend to overestimate how much their peers drink, which is associated with higher personal alcohol use. However, research has not yet examined whether this phenomenon holds true among Native American (NA) college students. This study examined associations between descriptive norms and alcohol use/consequences in a sample of NA and non-Hispanic White (NHW) college students. NA (n=147, 78.6% female) and NHW (n=246, 67.8% female) undergraduates completed an online survey. NAs and NHWs showed similar descriptive norms such that the "typical college student," "typical NA student," and "typical NHW student" were perceived to drink more than "best friends." "Best friends" descriptive norms (i.e., estimations of how many drinks per week were consumed by participants' best friends) were the most robust predictors of alcohol use/consequences. Effect size estimates of the associations between drinking norms and participants' alcohol use were consistently positive and ranged from r=0.25 to r=0.51 across the four reference groups. Negative binomial hurdle models revealed that all descriptive norms tended to predict drinking, and "best friends" drinking norms predicted alcohol consequences. Apart from one interaction effect, likely due to familywise error rate, these associations were not qualified by interactions with racial/ethnic group. We found similar patterns between NAs and NHWs both in the pattern of descriptive norms across reference groups and in the strength of associations between descriptive norms and alcohol use/consequences. Although these results suggest that descriptive norms operate similarly among NAs as other college students, additional research is needed to identify whether other norms (e.g., injunctive norms) operate similarly across NA and NHW students. Copyright © 2017. Published by Elsevier Ltd.
Hummer, Justin F.; LaBrie, Joseph W.; Lac, Andrew; Sessoms, Ashley; Cail, Jessica
2012-01-01
Reflective opposite-sex norms are behaviors that an individual believes the opposite sex prefers him or her to engage in. The current study extends research on this recently introduced construct by examining estimates and influences of reflective norms on drinking in a large high-risk heterosexual sample of male and female college students from two universities. Both gender and Greek-affiliation served as potential statistical moderators of the reflective norms and drinking relationship. All participants (N = 1790; 57% female) answered questions regarding the amount of alcohol they believe members of the opposite sex would like their opposite sex friends, dates, and sexual partners to drink. Participants also answered questions regarding their actual preferences for drinking levels in each of these three relationship categories. Overall, women overestimated how much men prefer their female friends and potential sexual partners to drink, whereas men overestimated how much women prefer their sexual partners to drink. Greek-affiliated males demonstrated higher reflective norms than non-Greek males across all relationship categories, and for dating partners, only Greek-affiliated males misperceived women's actual preferences. Among women, however, there were no differences between reflective norms estimates or the degree of misperception as a function of Greek status. Most importantly, over and above perceived same-sex social norms, higher perceived reflective norms tended to account for greater variance in alcohol consumption for Greeks (vs. non-Greeks) and males (vs. females), particularly within the friend and sexual partner contexts. The findings highlight that potential benefits might arise if existing normative feedback interventions were augmented with reflective normative feedback designed to target the discrepancy between perceived and actual drinking preferences of the opposite sex. PMID:22305289
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then explore the joint a posteriori probability density function associated with the cost function around such a minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
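The global-exploration step described above (basin hopping: large random hops, each followed by a local minimization, keeping the best minimum found) can be sketched in a dependency-free way. Everything below is an illustrative reconstruction, not the authors' code: `brune_spec` is a generalized Brune-type spectral shape with free high-frequency decay, `misfit` is an L2-norm misfit in the log domain, and `local_descent` is a random-search stand-in for a deterministic local minimizer.

```python
import math
import random

def brune_spec(f, omega0, fc, gamma):
    # Generalized Brune-type displacement amplitude spectrum: flat level
    # omega0 below the corner frequency fc, power-law decay (exponent gamma)
    # above it.
    return omega0 / (1.0 + (f / fc) ** gamma)

def misfit(params, freqs, obs):
    # L2-norm misfit in the log domain, as is common for amplitude spectra.
    o0, fc, g = params
    if o0 <= 0 or fc <= 0 or g <= 0:
        return float("inf")
    return sum((math.log(a) - math.log(brune_spec(f, o0, fc, g))) ** 2
               for f, a in zip(freqs, obs))

def local_descent(params, freqs, obs, steps=300, scale=0.05, seed=0):
    # Gradient-free local refinement: small multiplicative log-perturbations,
    # accepted only if they reduce the misfit (keeps parameters positive).
    rng = random.Random(seed)
    best, fbest = list(params), misfit(params, freqs, obs)
    for _ in range(steps):
        cand = [p * math.exp(scale * rng.gauss(0, 1)) for p in best]
        m_cand = misfit(cand, freqs, obs)
        if m_cand < fbest:
            best, fbest = cand, m_cand
    return best, fbest

def basin_hopping(freqs, obs, start, hops=20, hop_scale=0.5, seed=1):
    # Outer loop: big random hops around the current best, each followed by a
    # local refinement; the overall best minimum is retained.
    rng = random.Random(seed)
    best, fbest = local_descent(start, freqs, obs, seed=seed)
    for k in range(hops):
        hop = [p * math.exp(hop_scale * rng.gauss(0, 1)) for p in best]
        cand, fcand = local_descent(hop, freqs, obs, seed=seed + k + 1)
        if fcand < fbest:
            best, fbest = cand, fcand
    return best, fbest
```

Fitting noise-free synthetic data generated with (omega0, fc, gamma) = (100, 5, 2) from a deliberately distant start recovers the corner frequency to within a modest factor; the real workflow would follow this with the grid-based pdf integration the abstract describes.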
On the structure of critical energy levels for the cubic focusing NLS on star graphs
NASA Astrophysics Data System (ADS)
Adami, Riccardo; Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego
2012-05-01
We provide information on a non-trivial structure of phase space of the cubic nonlinear Schrödinger (NLS) on a three-edge star graph. We prove that, in contrast to the case of the standard NLS on the line, the energy associated with the cubic focusing Schrödinger equation on the three-edge star graph with a free (Kirchhoff) vertex does not attain a minimum value on any sphere of constant L2-norm. We moreover show that the only stationary state with prescribed L2-norm is indeed a saddle point.
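For context, the constrained variational problem in question can be written explicitly. The formulas below are the standard textbook forms of the cubic focusing NLS energy and of the free (Kirchhoff) vertex conditions on a three-edge star graph, stated as a reading aid rather than quoted from the paper:

```latex
% Energy of the cubic focusing NLS on a star graph with edges e = 1,2,3,
% each identified with the half-line [0, +\infty):
E(\psi) \;=\; \frac{1}{2}\sum_{e=1}^{3}\int_{0}^{+\infty}\!\bigl|\psi_e'(x)\bigr|^{2}\,dx
        \;-\; \frac{1}{4}\sum_{e=1}^{3}\int_{0}^{+\infty}\!\bigl|\psi_e(x)\bigr|^{4}\,dx,
% minimized subject to the fixed-mass (L^2-norm) constraint
\sum_{e=1}^{3}\int_{0}^{+\infty}\bigl|\psi_e(x)\bigr|^{2}\,dx \;=\; m,
% with the free (Kirchhoff) conditions at the vertex:
\psi_1(0)=\psi_2(0)=\psi_3(0), \qquad \sum_{e=1}^{3}\psi_e'(0^{+})=0 .
```

The abstract's result is that, unlike on the line, the infimum of E over this constant-mass sphere is not attained, and the unique stationary state with prescribed mass is a saddle point.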
Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira
2012-05-01
Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance in bilinguals on language but not visual-perceptual format tests. This was studied by comparing neuropsychological false-positive rates using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual, Hispanic Americans. Participants were tested in separate sessions in Spanish and English in counterbalanced order on La Bateria Neuropsicologica and the original English language tests on which this battery was based. For language format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives most approximate to that expected from the normal distribution, whereas the 50th percentile of conventional English language norms did the same for visual-perceptual format measures. When using conventional Spanish or English monolingual norms for language format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
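The reweighting idea at irMxNE's core can be illustrated on a deliberately minimal problem. The sketch below is my own toy, not the paper's solver: an identity forward model with scalar entries standing in for source blocks, and the common majorizer weights 1/(2·sqrt(|x|+eps)) for the sqrt(|x|) penalty, so that each convex l1 surrogate is solved in closed form by soft-thresholding instead of block coordinate descent over a lead-field regression.

```python
def soft_threshold(v, t):
    # Proximal operator of t*|x|: shrink v toward zero by t.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ir_l05(y, lam, n_iter=10, eps=1e-8):
    # Iteratively reweighted l1 surrogates for the non-convex penalty
    # lam * sum_i sqrt(|x_i|), with an identity forward model:
    #   minimize 0.5 * (y_i - x_i)^2 + lam * sqrt(|x_i|)   per entry.
    x = list(y)  # initialize from the data
    for _ in range(n_iter):
        # Majorize sqrt(|x_i|) by |x_i| / (2 sqrt(|x_prev| + eps)) + const,
        # giving per-entry weights for the convex surrogate.
        w = [1.0 / (2.0 * (abs(xi) + eps) ** 0.5) for xi in x]
        # Each weighted-l1 subproblem separates and is solved exactly.
        x = [soft_threshold(yi, lam * wi) for yi, wi in zip(y, w)]
    return x
```

Small entries are annihilated while large ones suffer far less shrinkage than under a plain l1 penalty with the same regularization strength, mirroring the reduced amplitude bias the abstract reports for the block-wise scheme.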
On Bernstein type inequalities and a weighted Chebyshev approximation problem on ellipses
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
A classical inequality due to Bernstein which estimates the norm of polynomials on any given ellipse in terms of their norm on any smaller ellipse with the same foci is examined. For the uniform and a certain weighted uniform norm, and for the case that the two ellipses are not too close, sharp estimates of this type were derived and the corresponding extremal polynomials were determined. These Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Some new results were also presented for a weighted approximation problem of this type.
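The classical inequality referred to can be stated in its standard single-ellipse form (the paper's contribution, the sharp constants for two confocal ellipses and for the weighted norm, is not reproduced here):

```latex
% Bernstein ellipse with foci \pm 1 and semi-axis sum \rho:
E_\rho \;=\; \Bigl\{ \tfrac{1}{2}\bigl(w + w^{-1}\bigr) \,:\, |w| = \rho \Bigr\},
\qquad \rho > 1 .
% For every polynomial p_n of degree at most n,
\max_{z \in E_\rho} |p_n(z)| \;\le\; \rho^{\,n}\, \max_{x \in [-1,1]} |p_n(x)| .
```

The Chebyshev polynomial \(T_n\) shows that the growth rate \(\rho^n\) cannot be improved: \(\max_{[-1,1]}|T_n| = 1\) while \(\max_{E_\rho}|T_n| = (\rho^n + \rho^{-n})/2\).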
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
Perceived Norms and Social Values to Capture School Culture in Elementary and Middle School
ERIC Educational Resources Information Center
Galvan, Adriana; Spatzier, Agnieszka; Juvonen, Jaana
2011-01-01
The current study was designed to gain insights into shifting school culture by examining perceived peer group norms and social values across elementary and middle school grades. Perceived norms were assessed by asking participants (N = 605) to estimate how many grade mates were academically engaged, disengaged, and antisocial. To capture social…
Park, Hee Sun; Smith, Sandi W; Klein, Katherine A; Martell, Dennis
2011-05-01
Social norms campaigns, which are based on correcting misperceptions of alcohol consumption, have frequently been applied to reduce college students' alcohol consumption. This study examined estimation and accuracy of normative perceptions for students during everyday drinking occasions. Students who reported having 4 or fewer drinks underestimated the percentage of other students who had 4 or fewer drinks, while those who drank 5 or more drinks overestimated the percentage of other students who had 5 or more drinks. Believability of advertisements featured in social norms campaigns also played a crucial role in this process. Those who believed the ad more closely estimated alcohol consumption by their peers while ad believability moderated the relation between drinking behaviors and accuracy.
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L2-norm for the scalar unknown u and a priori error estimates in the (L2)2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
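The adaptively soft-thresholded SVD step the abstract describes can be sketched directly on a list of singular values (keeping the example dependency-free by skipping the SVD itself). The weight rule w_i = 1/(sigma_i + eps)^gamma, and the values of gamma and eps, are illustrative choices of mine that merely satisfy the abstract's restriction that weights decrease as the singular value grows; in the full method the weights come from a preliminary estimate and the shrunken values are recombined as U·diag(out)·V^T.

```python
def adaptive_svt(singular_values, lam, gamma=2.0, eps=1e-6):
    # Adaptive soft-thresholding of singular values: each sigma_i is shrunk
    # by lam * w_i with w_i = 1 / (sigma_i + eps)^gamma.  Because w_i
    # decreases as sigma_i grows, large (signal) singular values are barely
    # penalized while small (noise) ones are driven exactly to zero,
    # producing a low-rank estimate.
    out = []
    for s in sorted(singular_values, reverse=True):
        w = 1.0 / (s + eps) ** gamma
        out.append(max(s - lam * w, 0.0))
    return out
```

With lam = 0.1 and singular values [5.0, 3.0, 0.2, 0.05], the two large values lose well under 1% of their magnitude while both small values are zeroed, so the reconstructed matrix has rank 2.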
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
Polonec, Lindsey D; Major, Ann Marie; Atwood, L Erwin
2006-01-01
In an effort to reduce dangerous drinking levels among college students, university health educators have initiated social norms campaigns based on the rationale that students will be more likely to reduce their own drinking behaviors if they think that most students on campus are not heavy or binge drinkers. Within the framework of social comparisons theory, this study reports the findings of a survey of 277 college students and explores the correlates of accuracy and bias in students' estimates of whether or not most other students think that binge drinking on campus is a problem and whether or not most other students believe the campaign message. The overwhelming majority (72.6%) of students did not believe the norms message that most students on campus drink "0 to 4" drinks when they party, and 52.7% reported drinking "5 or more" drinks in a sitting. The social norms campaign was effective in motivating 61% of the respondents to think about binge drinking as a problem. For the most part, group or social network norms were more influential on students' own drinking behavior than were their estimates of the campus drinking norm. The findings also clarify that accuracy in estimating the campus social norm in and of itself does not necessarily lead to an increase or reduction in alcohol consumption. The social comparisons approach underscores the complex and social nature of human interaction and reinforces the need for the development of multiple approaches to alcohol education with messages that are designed to target the specific needs of students based on their orientations toward alcohol consumption.
Lande, Russell
2009-07-01
Adaptation to a sudden extreme change in environment, beyond the usual range of background environmental fluctuations, is analysed using a quantitative genetic model of phenotypic plasticity. Generations are discrete, with time lag tau between a critical period for environmental influence on individual development and natural selection on adult phenotypes. The optimum phenotype, and genotypic norms of reaction, are linear functions of the environment. Reaction norm elevation and slope (plasticity) vary among genotypes. Initially, in the average background environment, the character is canalized with minimum genetic and phenotypic variance, and no correlation between reaction norm elevation and slope. The optimal plasticity is proportional to the predictability of environmental fluctuations over time lag tau. During the first generation in the new environment the mean fitness suddenly drops and the mean phenotype jumps towards the new optimum phenotype by plasticity. Subsequent adaptation occurs in two phases. Rapid evolution of increased plasticity allows the mean phenotype to closely approach the new optimum. The new phenotype then undergoes slow genetic assimilation, with reduction in plasticity compensated by genetic evolution of reaction norm elevation in the original environment.
Robust k-mer frequency estimation using gapped k-mers
Ghandi, Mahmoud; Mohammad-Noori, Morteza
2013-01-01
Oligomers of fixed length, k, commonly known as k-mers, are often used as fundamental elements in the description of DNA sequence features of diverse biological function, or as intermediate elements in the constuction of more complex descriptors of sequence features such as position weight matrices. k-mers are very useful as general sequence features because they constitute a complete and unbiased feature set, and do not require parameterization based on incomplete knowledge of biological mechanisms. However, a fundamental limitation in the use of k-mers as sequence features is that as k is increased, larger spatial correlations in DNA sequence elements can be described, but the frequency of observing any specific k-mer becomes very small, and rapidly approaches a sparse matrix of binary counts. Thus any statistical learning approach using k-mers will be susceptible to noisy estimation of k-mer frequencies once k becomes large. Because all molecular DNA interactions have limited spatial extent, gapped k-mers often carry the relevant biological signal. Here we use gapped k-mer counts to more robustly estimate the ungapped k-mer frequencies, by deriving an equation for the minimum norm estimate of k-mer frequencies given an observed set of gapped k-mer frequencies. We demonstrate that this approach provides a more accurate estimate of the k-mer frequencies in real biological sequences using a sample of CTCF binding sites in the human genome. PMID:23861010
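The minimum-norm step can be sketched with a toy alphabet: build the linear map A from ungapped k-mer frequencies to gapped k-mer counts, then apply the Moore-Penrose pseudoinverse. The two-letter alphabet, k = 3, and single-wildcard patterns are illustrative choices, not the paper's setup.

```python
import itertools
import numpy as np

alphabet, k = "AC", 3
kmers = ["".join(p) for p in itertools.product(alphabet, repeat=k)]

# gapped patterns: one wildcard "." at each position (toy choice)
gapped = []
for pos in range(k):
    for p in itertools.product(alphabet, repeat=k - 1):
        pat = list(p)
        pat.insert(pos, ".")
        gapped.append("".join(pat))

# A[i, j] = 1 if ungapped k-mer j matches gapped pattern i
A = np.array([[float(all(g == "." or g == c for g, c in zip(gp, km)))
               for km in kmers] for gp in gapped])

true_f = np.random.default_rng(0).random(len(kmers))   # hidden k-mer frequencies
g = A @ true_f                                         # observed gapped counts

f_est = np.linalg.pinv(A) @ g                          # minimum norm estimate
```

The pseudoinverse solution is exactly the minimum-norm vector consistent with the observed gapped counts: it reproduces g and never has larger norm than the true frequency vector.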
Robust k-mer frequency estimation using gapped k-mers.
Ghandi, Mahmoud; Mohammad-Noori, Morteza; Beer, Michael A
2014-08-01
Oligomers of fixed length, k, commonly known as k-mers, are often used as fundamental elements in the description of DNA sequence features of diverse biological function, or as intermediate elements in the construction of more complex descriptors of sequence features such as position weight matrices. k-mers are very useful as general sequence features because they constitute a complete and unbiased feature set, and do not require parameterization based on incomplete knowledge of biological mechanisms. However, a fundamental limitation in the use of k-mers as sequence features is that as k is increased, larger spatial correlations in DNA sequence elements can be described, but the frequency of observing any specific k-mer becomes very small, and rapidly approaches a sparse matrix of binary counts. Thus any statistical learning approach using k-mers will be susceptible to noisy estimation of k-mer frequencies once k becomes large. Because all molecular DNA interactions have limited spatial extent, gapped k-mers often carry the relevant biological signal. Here we use gapped k-mer counts to more robustly estimate the ungapped k-mer frequencies, by deriving an equation for the minimum norm estimate of k-mer frequencies given an observed set of gapped k-mer frequencies. We demonstrate that this approach provides a more accurate estimate of the k-mer frequencies in real biological sequences using a sample of CTCF binding sites in the human genome.
Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R
2007-03-01
The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-revised and the Hopkins Verbal Learning Test-revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms; the regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed at combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data fidelity and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably.
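The convexity-preserving idea can be illustrated on a scalar denoising problem. The logarithmic penalty, the parameter values, and the bounded search interval below are illustrative assumptions; the thesis treats general parameterized non-convex regularizers. Keeping the non-convexity parameter a below 1/λ leaves the total cost strictly convex, while the estimate under-shrinks less than the ℓ1 soft threshold.

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam = 2.0
a = 0.9 / lam        # non-convexity parameter kept below 1/lam: cost stays convex
y = 5.0              # a noisy scalar observation

def cost(x):
    # quadratic data fidelity + parameterized non-convex log penalty
    return 0.5 * (y - x) ** 2 + lam * np.log(1.0 + a * abs(x)) / a

x_log = minimize_scalar(cost, bounds=(-10.0, 10.0), method="bounded").x
x_soft = np.sign(y) * max(abs(y) - lam, 0.0)   # l1 (soft-threshold) estimate
```

The non-convex penalty's estimate lies strictly between the soft-threshold output and the observation, illustrating the reduced amplitude bias the thesis targets.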
The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and global fluctuations of any of the four basic solar wind parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We present as well general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
Hedeker, D; Flay, B R; Petraitis, J
1996-02-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
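A minimal sketch of the empirical Bayes idea, under simplifying assumptions (known variance components, a single predictor, balanced repeated measures): each person's least-squares slope is shrunk toward the overall mean slope in proportion to its precision, yielding individual-level estimates strengthened by the overall sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_obs, sigma2, tau2 = 40, 8, 1.0, 0.5

true_slopes = rng.normal(1.0, np.sqrt(tau2), n_people)   # individual weightings
x = rng.standard_normal((n_people, n_obs))
y = true_slopes[:, None] * x + rng.normal(0, np.sqrt(sigma2), (n_people, n_obs))

# per-person least-squares slope and its sampling variance
b_ols = (x * y).sum(1) / (x * x).sum(1)
v_ols = sigma2 / (x * x).sum(1)

# empirical Bayes: precision-weighted shrinkage toward the overall mean slope
mu = b_ols.mean()
w = tau2 / (tau2 + v_ols)
b_eb = w * b_ols + (1 - w) * mu
```

Because each shrinkage weight lies strictly between 0 and 1, every empirical Bayes estimate is pulled toward the population mean, with noisier individual estimates pulled harder.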
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is a priori known that objects are compact, as with cracks and voids, choosing a "Minimum Support" functional instead of the minimum squared L2 norm yields an image that is equally in agreement with the available data, while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computer intensiveness.
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem in the form of an optimization, over a well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
Scale of reference bias and the evolution of health.
Groot, Wim
2003-09-01
The analysis of subjective measures of well-being, such as self-reports by individuals about their health status, is frequently hampered by the problem of scale of reference bias. A particular form of scale of reference bias is age norming. In this study we corrected for scale of reference bias by allowing for individual specific effects in an equation on subjective health. A random effects ordered response model was used to analyze scale of reference bias in self-reported health measures. The results indicate that if we do not control for unobservable individual specific effects, the response to a subjective health state measure suffers from age norming. Age norming can be controlled for by a random effects estimation technique using longitudinal data. Further, estimates are presented on the rate of depreciation of health. Finally, simulations of life expectancy indicate that the estimated model provides a reasonably good fit of the true life expectancy.
High resolution beamforming on large aperture vertical line arrays: Processing synthetic data
NASA Astrophysics Data System (ADS)
Tran, Jean-Marie Q.; Hodgkiss, William S.
1990-09-01
This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full-aperture coherent processing, which offers in theory the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data which constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full-aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
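One of the processors named above, the MVDR beamformer, can be sketched on synthetic plane-wave data for a uniform line array; this is a far simpler setting than the memo's vertical-array, normal-mode environment, and the element count, angles, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_snap, d = 16, 400, 0.5          # element spacing in wavelengths
angles_true = np.array([-20.0, 25.0])        # source arrival angles (degrees)

def steering(theta_deg):
    k = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

# two uncorrelated plane waves plus white sensor noise
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = (steering(angles_true[0])[:, None] * S[0]
     + steering(angles_true[1])[:, None] * S[1])
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# sample covariance and MVDR spatial spectrum P(theta) = 1 / (a^H R^-1 a)
R = X @ X.conj().T / n_snap
Ri = np.linalg.inv(R)
grid = np.arange(-90.0, 90.0, 0.5)
p_mvdr = np.array([1.0 / np.real(steering(t).conj() @ Ri @ steering(t))
                   for t in grid])

# the two largest local maxima give the estimated arrival angles
loc = np.r_[False, (p_mvdr[1:-1] > p_mvdr[:-2]) & (p_mvdr[1:-1] > p_mvdr[2:]), False]
peak_angles = grid[loc][np.argsort(p_mvdr[loc])[-2:]]
```

MUSIC and the minimum norm method replace the inverse covariance with functions of its noise-subspace eigenvectors but scan the same steering vectors over the same angular grid.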
Sun-Direction Estimation Using a Partially Underdetermined Set of Coarse Sun Sensors
NASA Astrophysics Data System (ADS)
O'Keefe, Stephen A.; Schaub, Hanspeter
2015-09-01
A comparison of different methods to estimate the sun-direction vector using a partially underdetermined set of cosine-type coarse sun sensors (CSS), while simultaneously controlling the attitude towards a power-positive orientation, is presented. CSS are commonly used in performing power-positive sun-pointing and are attractive due to their low cost, small size, and reduced power consumption. For this study only CSS and rate gyro measurements are available, and the sensor configuration does not provide the global triple coverage required for a unique sun-direction calculation. The methods investigated include a vector average method, a combination of least squares and minimum norm criteria, and an extended Kalman filter approach. All cases are formulated such that precise ground calibration of the CSS is not required. Despite significant biases in the state dynamics and measurement models, Monte Carlo simulations show that an extended Kalman filter approach can provide degree-level accuracy of the sun-direction vector both with and without a control algorithm running simultaneously, despite the underdetermined sensor coverage. If no rate gyro measurements are available, and rates are partially estimated from CSS, the EKF performance degrades as expected, but is still able to achieve better than 10° accuracy using only CSS measurements.
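The least-squares/minimum-norm criterion can be sketched for an idealized, noise-free cosine-response model: only illuminated sensors (positive response) are stacked, and the pseudoinverse gives the minimum-norm least-squares direction. The sensor normals below are a hypothetical symmetric layout, not a flight configuration.

```python
import numpy as np

s_true = np.array([0.6, 0.3, 0.742])
s_true /= np.linalg.norm(s_true)            # true sun direction (unit vector)

# hypothetical CSS boresight normals: the eight corner directions of a cube
signs = np.array([[sx, sy, sz] for sx in (1, -1)
                               for sy in (1, -1)
                               for sz in (1, -1)])
N = signs / np.sqrt(3.0)

meas = N @ s_true                           # ideal cosine response
active = meas > 0                           # only illuminated sensors respond

# least squares / minimum norm estimate from the illuminated sensors
s_hat = np.linalg.pinv(N[active]) @ meas[active]
s_hat /= np.linalg.norm(s_hat)
```

When fewer than three linearly independent sensors are lit, the same pseudoinverse returns the minimum-norm direction consistent with the measurements, which is where the underdetermined cases discussed in the abstract arise.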
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
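A weighted minimum norm estimate of the kind used here can be sketched with a toy lead-field matrix. The column-norm weighting and regularization value are illustrative assumptions, standing in for the paper's subject-specific finite element model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_elec, n_src = 16, 200
A = rng.standard_normal((n_elec, n_src))      # toy lead-field matrix

# column-norm weighting compensates the bias toward strongly coupled sources
W = np.diag(1.0 / np.linalg.norm(A, axis=0) ** 2)

x_true = np.zeros(n_src)
x_true[40] = 1.0                              # a single active source
b = A @ x_true                                # surface EMG measurements

# weighted minimum norm estimate: x = W A^T (A W A^T + lam I)^{-1} b
lam = 1e-6
x_wmne = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(n_elec), b)
```

The estimate reproduces the measurements almost exactly while distributing current density with the smallest weighted norm, which is why the weighting matrix matters: it decides how reconstruction energy is allocated among sources.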
Rimal, Rajiv N
2008-01-01
Informed by the theory of normative social behavior, this article sought to determine the underlying mediating and moderating factors in the relationship between descriptive norms and behavioral intentions. Furthermore, the theory was extended by asking whether, and what role, behavioral identity played in normative influences. In this field experiment, simulating the central message of norms-based interventions to reduce college students' alcohol consumption, descriptive norms were manipulated by informing half of the students (n = 665) that their peers consumed less alcohol than they might believe. Others (n = 672) were not provided any norms information. Students' injunctive norms, outcome expectations, group identity, behavioral identity, and behavioral intention surrounding alcohol consumption were then measured. Exposure to the low-norms information resulted in a significant drop in estimates of the prevalence of consumption. Injunctive norms and outcome expectations partially mediated and also moderated the relationship between descriptive norms and behavioral intentions. Group identity and behavioral identity also moderated the relationship between descriptive norms and behavioral intentions, but the effect size was relatively small for group identity. Implications for health campaigns are also discussed.
NASA Astrophysics Data System (ADS)
Liu, Qiao
2015-06-01
In a recent paper [7], Y. Du and K. Wang (2013) proved that the global-in-time Koch-Tataru type solution (u, d) to the n-dimensional incompressible nematic liquid crystal flow with small initial data (u0, d0) in BMO⁻¹ × BMO has arbitrary space-time derivative estimates in the so-called Koch-Tataru space norms. The purpose of this paper is to show that the Koch-Tataru type solution satisfies decay estimates for any space-time derivative involving some borderline Besov space norms.
Injunctive Norms and Alcohol Consumption: A Revised Conceptualization
Krieger, Heather; Neighbors, Clayton; Lewis, Melissa A.; LaBrie, Joseph W.; Foster, Dawn W.; Larimer, Mary E.
2016-01-01
Background Injunctive norms have been found to be important predictors of behaviors in many disciplines with the exception of alcohol research. This exception is likely due to a misconceptualization of injunctive norms for alcohol consumption. To address this, we outline and test a new conceptualization of injunctive norms and personal approval for alcohol consumption. Traditionally, injunctive norms have been assessed using Likert scale ratings of approval perceptions, whereas descriptive norms and individual behaviors are typically measured with behavioral estimates (i.e., number of drinks consumed per week, frequency of drinking, etc.). This makes comparisons between these constructs difficult because they are not similar conceptualizations of drinking behaviors. The present research evaluated a new representation of injunctive norms with anchors comparable to descriptive norms measures. Methods A study and a replication were conducted including 2,559 and 1,189 undergraduate students from three different universities. Participants reported on their alcohol-related consumption behaviors, personal approval of drinking, and descriptive and injunctive norms. Personal approval and injunctive norms were measured using both traditional measures and a new drink-based measure. Results Results from both studies indicated that drink-based injunctive norms were uniquely and positively associated with drinking whereas traditionally assessed injunctive norms were negatively associated with drinking. Analyses also revealed significant unique associations between drink-based injunctive norms and personal approval when controlling for descriptive norms. Conclusions These findings provide support for a modified conceptualization of personal approval and injunctive norms related to alcohol consumption and, importantly, offer an explanation and a practical solution for the small and inconsistent findings related to injunctive norms and drinking in past studies. PMID:27030295
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
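The thresholded plug-in idea can be sketched as follows; the threshold constant, sample sizes, and the toy sparse correlation matrix are illustrative choices, not the paper's theory.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 50
# sparse correlation truth: identity plus a single correlated pair
R_true = np.eye(p)
R_true[0, 1] = R_true[1, 0] = 0.5

X = rng.multivariate_normal(np.zeros(p), R_true, size=n)
R_hat = np.corrcoef(X, rowvar=False)

# hard-threshold the off-diagonal entries at the sqrt(log p / n) scale
tau = 2.0 * np.sqrt(np.log(p) / n)
off = R_hat - np.diag(np.diag(R_hat))
R_thr = np.where(np.abs(off) >= tau, off, 0.0) + np.eye(p)

# plug-in estimates of the squared off-diagonal Frobenius functional
q_naive = np.sum((R_hat - np.eye(p)) ** 2)
q_thr = np.sum((R_thr - np.eye(p)) ** 2)
q_true = np.sum((R_true - np.eye(p)) ** 2)
```

The naive plug-in accumulates a noise term of order p²/n over all empty entries, while thresholding zeroes them out and keeps only the genuinely correlated pair, so its estimate lands much closer to the true functional.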
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986
Carroll, Suzanne J; Paquet, Catherine; Howard, Natasha J; Coffee, Neil T; Adams, Robert J; Taylor, Anne W; Niyonsenga, Theo; Daniel, Mark
2017-02-02
Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories, yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m² [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability was associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm.
Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources, such that availability poses no barrier to obtaining healthful or unhealthful foods in this region. However, the influence of local descriptive norms on HbA1c varied according to food resource availability. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch-and-bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality is highly dependent on initial guesses owing to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way, and interval arithmetic to derive our feasibility problem and lower-bounding function. Our method is developed for the Cook-Torrance model combined with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.
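The contrast between L1- and L2-norm fitting can be illustrated without the branch-and-bound machinery. Below is a minimal numpy sketch, not the authors' solver or a BRDF model: it approximates the L1 minimizer by iteratively reweighted least squares (IRLS) on a toy linear fit, and shows how a single outlier skews the L2 fit but barely moves the L1 fit.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Approximate argmin_x ||A x - b||_1 by iteratively
    reweighted least squares (IRLS)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 solution as the start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)  # weights ~ 1/|residual|
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x

# Toy fitting problem: y = 2 t + 1, with one gross outlier.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[10] += 25.0                                         # the outlier
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
x_l1 = irls_l1(A, b)
```

With the outlier present, `x_l1` stays near the true parameters (2, 1) while `x_l2` is pulled away; this is the robustness property the abstract attributes to L1-norm minimization.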
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Yu, Qiang; Hu, Yong-Jun; Miao, Chang-Xin
2017-11-01
Current research in broken rotor bar (BRB) fault detection in induction motors is primarily focused on high-frequency-resolution analysis of the stator current. Compared with the discrete Fourier transform, parametric spectrum estimation offers higher frequency accuracy and resolution. However, existing detection methods based on parametric spectrum estimation cannot realize online detection, owing to their large computational cost. To improve the efficiency of BRB fault detection, a new detection method based on the min-norm algorithm and least-squares estimation is proposed in this paper. First, the stator current is filtered using a band-pass filter and divided into short overlapped data windows. The min-norm algorithm is then applied to determine the frequencies of the fundamental and fault characteristic components within each overlapped data window. Next, based on the frequency values obtained, a model of the fault current signal is constructed. Subsequently, a linear least-squares problem solved through singular value decomposition is designed to estimate the amplitudes and phases of the related components. Finally, the proposed method is applied to a simulated current and an actual motor; the results indicate that the method retains the frequency accuracy of the parametric spectrum estimation technique while remaining efficient enough for online detection.
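Once the component frequencies are fixed, the amplitude-and-phase step is an ordinary linear least-squares problem, which numpy's `lstsq` solves through the SVD. The sketch below is an illustrative stand-in, not the paper's implementation: the 50 Hz fundamental and the 44/56 Hz "fault" sidebands, and all amplitudes, are assumed values.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 0.2, 1.0 / fs)
freqs = [50.0, 44.0, 56.0]        # assumed fundamental + fault sidebands (Hz)
amps  = [1.0, 0.05, 0.04]
phis  = [0.3, -1.0, 2.0]
x = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phis))
x += 0.01 * np.random.default_rng(1).standard_normal(t.size)  # measurement noise

# Linear model: x(t) = sum_k c_k cos(2 pi f_k t) + s_k sin(2 pi f_k t)
cols = []
for f in freqs:
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
H = np.column_stack(cols)
theta = np.linalg.lstsq(H, x, rcond=None)[0]      # SVD-based least squares
amp_est = np.hypot(theta[0::2], theta[1::2])      # amplitude of each component
phase_est = np.arctan2(-theta[1::2], theta[0::2]) # phase of each component
```

Because a cos(ωt + φ) = (a cos φ) cos ωt - (a sin φ) sin ωt, the amplitudes and phases follow directly from the fitted coefficient pairs.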
One-norm geometric quantum discord and critical point estimation in the XY spin chain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Chang-Cheng; Wang, Yao; Guo, Jin-Liang, E-mail: guojinliang80@163.com
2016-11-15
In contrast with entanglement and quantum discord (QD), we investigate the thermal quantum correlation in terms of Schatten one-norm geometric quantum discord (GQD) in the XY spin chain, and analyze their capabilities in detecting the critical point of the quantum phase transition. We show that the one-norm GQD can reveal more properties of the quantum correlation between two spins, especially the long-range quantum correlation at finite temperature. Under the influences of site distance, anisotropy and temperature, one-norm GQD and its first derivative make it possible to detect the critical point efficiently for a general XY spin chain. Highlights: • Compared with entanglement and QD, one-norm GQD is more robust against temperature. • One-norm GQD is more efficient in characterizing long-range quantum correlation between two distant qubits. • One-norm GQD performs well in highlighting the critical point of the QPT at zero or low finite temperature. • One-norm GQD has a number of advantages over QD in detecting the critical point of the spin chain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images consistent with the MEG data. Previous approaches to this problem have concentrated on weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image with the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array.
Wang, Qi; Wang, Yingmin; Zhu, Guolei
2016-12-30
The receiver hydrophone array is the signal front end and plays an important role in matched field processing; it usually must cover the whole water column from the sea surface to the bottom, and such a large-aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small-aperture hydrophone array is proposed. It first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small-aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small-aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small-aperture array, demonstrating the effectiveness of the proposed algorithm.
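The minimum-norm amplitude step can be sketched in a few lines of numpy: with fewer sensors than modes the linear system is underdetermined, and the SVD-based `lstsq` returns precisely the minimum-norm solution, from which the field over the whole column is recalculated. The ideal-waveguide sine modes and array geometry below are illustrative assumptions, not the paper's propagation model.

```python
import numpy as np

rng = np.random.default_rng(2)
depths = np.linspace(0.0, 100.0, 60)     # full water column (m)
M = 8                                    # number of propagating normal modes
# Toy depth functions: sine modes of an ideal 100 m waveguide.
Phi_full = np.sin(np.outer(depths, np.arange(1, M + 1)) * np.pi / 100.0)
a_true = rng.standard_normal(M)          # true mode amplitudes
p_full = Phi_full @ a_true               # acoustic field over the whole column

idx = np.arange(15, 40, 5)               # small-aperture array: 5 sensors
Phi_small, p_small = Phi_full[idx], p_full[idx]

# 5 equations, 8 unknowns: lstsq gives the minimum-norm amplitude estimate.
a_hat = np.linalg.lstsq(Phi_small, p_small, rcond=None)[0]
p_recalc = Phi_full @ a_hat              # recalculated full-column field
```

The estimate reproduces the measured field exactly at the sensor depths while having the smallest possible amplitude norm; how well `p_recalc` matches the true field elsewhere depends on the array geometry and mode set.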
Descriptive Drinking Norms: For Whom Does Reference Group Matter?*
Larimer, Mary E.; Neighbors, Clayton; LaBrie, Joseph W.; Atkins, David C.; Lewis, Melissa A.; Lee, Christine M.; Kilmer, Jason R.; Kaysen, Debra L.; Pedersen, Eric R.; Montoya, Heidi; Hodge, Kimberley; Desai, Sruti; Hummer, Justin F.; Walter, Theresa
2011-01-01
Objective: Perceived descriptive drinking norms often differ from actual norms and are positively related to personal consumption. However, it is not clear how normative perceptions vary with the specificity of the reference group. Are drinking norms more accurate and more closely related to drinking behavior as reference group specificity increases? Do these relationships vary as a function of participant demographics? The present study examined the relationship between perceived descriptive norms and drinking behavior by ethnicity (Asian or White), sex, and fraternity/sorority status. Method: Participants were 2,699 (58% female) White (75%) or Asian (25%) undergraduates from two universities who reported their own alcohol use and perceived descriptive norms for eight reference groups: "typical student"; same sex, ethnicity, or fraternity/sorority status; and all combinations of these three factors. Results: Participants generally reported the highest perceived norms for the most distal reference group (typical student), with perceptions becoming more accurate as individuals' similarity to the reference group increased. Despite increased accuracy, participants perceived that all reference groups drank more than was actually the case. Across specific subgroups (fraternity/sorority members and men), different patterns emerged. Fraternity/sorority members reliably reported higher estimates of drinking for reference groups that included fraternity/sorority status, and, to a lesser extent, men reported higher estimates for reference groups that included men. Conclusions: The results suggest that interventions targeting normative misperceptions may need to provide feedback based on participant demography or group membership. Although reference group-specific feedback may be important for some subgroups, typical student feedback provides the largest normative discrepancy for the majority of students. PMID:21906510
An Improved Measure of Cognitive Salience in Free Listing Tasks: A Marshallese Example
ERIC Educational Resources Information Center
Robbins, Michael C.; Nolan, Justin M.; Chen, Diana
2017-01-01
A new free-list measure of cognitive salience, B', is presented, which includes both list position and list frequency. It surpasses other extant measures by being normed to vary between a maximum of 1 and a minimum of 0, thereby making it useful for comparisons irrespective of list length or number of respondents. An illustration of its…
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.
2014-01-01
The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify the protein-coding regions in DNA is rising. Several techniques of DNA feature extraction involving various cross-disciplinary fields have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. Comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
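The noise-subspace idea can be demonstrated with a MUSIC-style pseudospectrum, a close relative of the least-norm estimator used in the paper (the specific least-norm weighting vector is omitted here). The synthetic period-3 cosine below stands in for a numerically mapped coding region; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 300, 12
n = np.arange(N)
x = np.cos(2 * np.pi * n / 3.0) + 0.5 * rng.standard_normal(N)  # period-3 + noise
x -= x.mean()

# Sample correlation matrix from overlapping length-L windows.
X = np.array([x[i:i + L] for i in range(N - L)])
R = X.T @ X / len(X)

# Noise subspace: all eigenvectors beyond the 2-dimensional signal
# subspace spanned by the period-3 cosine.
w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, :L - 2]

fgrid = np.linspace(0.05, 0.5, 451)
steer = np.exp(2j * np.pi * np.outer(np.arange(L), fgrid))
P = 1.0 / np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)  # pseudospectrum
f_peak = fgrid[np.argmax(P)]             # expected near 1/3 (period 3)
```

The pseudospectrum is large only where the steering vector is nearly orthogonal to the noise subspace, which for this signal happens at the period-3 frequency f ≈ 1/3.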
Han, Fang; Liu, Han
2017-02-01
The correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that, after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both the spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
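The transformed estimator the abstract refers to has a compact closed form under the transelliptical model: the latent Pearson correlation is recovered from Kendall's tau by R̂_jk = sin(π τ̂_jk / 2). A small numpy sketch (O(n²) tau computation, fine at this scale; the data-generating numbers are illustrative):

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau via pairwise sign agreement (O(n^2))."""
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return (dx * dy).sum() / (n * (n - 1))

def latent_correlation(X):
    """Transformed Kendall's tau estimator of the latent Pearson
    correlation matrix: R_jk = sin(pi/2 * tau_jk)."""
    d = X.shape[1]
    R = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            R[j, k] = R[k, j] = np.sin(0.5 * np.pi * kendall_tau(X[:, j], X[:, k]))
    return R

# Latent Gaussian pair with correlation 0.5, observed through unknown
# monotone margins (exp and cube) -- Kendall's tau is invariant to these.
rng = np.random.default_rng(7)
z = rng.standard_normal((2000, 2))
u = z[:, 0]
v = 0.5 * z[:, 0] + np.sqrt(0.75) * z[:, 1]
X = np.column_stack([np.exp(u), v ** 3])
R_hat = latent_correlation(X)
```

Because tau depends only on ranks, the monotone marginal transformations leave the estimate unchanged, which is exactly the robustness the transelliptical family buys.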
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped temperature-mortality curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it is limited by the absence of a method for describing uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature of a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums from spline curves allows comparison of minimum mortality temperatures across cities and proper investigation of their associations with climate, allowing for estimation uncertainty.
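The estimator can be sketched end-to-end on simulated data. The sketch below substitutes a simple quadratic for the spline and uses `numpy.polyfit`'s coefficient covariance for the parametric bootstrap draws; the true minimum at 21 °C and all other numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
temp = np.linspace(0.0, 30.0, 120)
# Simulated U-shaped mortality index with a true minimum at 21 degrees C.
y = 0.004 * (temp - 21.0) ** 2 + 0.05 * rng.standard_normal(temp.size)

coef, cov = np.polyfit(temp, y, deg=2, cov=True)

def curve_minimum(c, grid=np.linspace(0.0, 30.0, 601)):
    """Temperature at which the fitted curve attains its minimum."""
    return grid[np.argmin(np.polyval(c, grid))]

mmt = curve_minimum(coef)
# Approximate parametric bootstrap: draw coefficient vectors from the
# estimated sampling distribution and recompute each curve's minimum.
boot = np.array([curve_minimum(c)
                 for c in rng.multivariate_normal(coef, cov, size=2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
se = boot.std()
```

The percentile interval `(ci_lo, ci_hi)` and `se` quantify the uncertainty in the estimated minimum, which is what allows minimum mortality temperatures to be compared across cities.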
Eek, Elsy; Holmqvist, Lotta Widén; Sommerfeld, Disa K
2012-07-01
There is a lack of standardized and quantifiable measures of touch function for clinical work. Furthermore, accurate diagnostic judgments of touch function are not possible until normative values have been estimated. The objectives of this study were to establish adult norms of the perceptual threshold of touch (PTT) for the hands and feet according to age and gender, and to determine the effect of right/left side, handedness, height, weight, and body mass index (BMI) on the PTT. The PTT was assessed using a high-frequency transcutaneous electrical nerve stimulator (Hf/TENS) with self-adhesive skin electrodes in 346 adults. The PTT was identified as the level, registered in mA, at which the participants perceived a tingling sensation. The median PTT across all participants was 3.75 mA (range 2.50-7.25) in the hands and 10.00 mA (range 5.00-30.00) in the feet. The PTT increased with age. Men reported higher PTT than women, and the right hand had a higher PTT than the left. Handedness, height, weight, and BMI did not affect the PTT. Adult norms of the PTT in the hands for age, gender, and right/left side are presented for four age groups. The present study's estimates of the PTT in the hands can be used as adult norms. Adult norms for the feet could not be estimated because PTT values in the feet showed great variance.
Li, Tao; Wang, Jing; Lu, Miao; Zhang, Tianyi; Qu, Xinyun; Wang, Zhezhi
2017-01-01
Due to its sensitivity and specificity, real-time quantitative PCR (qRT-PCR) is a popular technique for investigating gene expression levels in plants. Based on the Minimum Information for Publication of Real-Time Quantitative PCR Experiments (MIQE) guidelines, it is necessary to select and validate putative appropriate reference genes for qRT-PCR normalization. In the current study, three algorithms, geNorm, NormFinder, and BestKeeper, were applied to assess the expression stability of 10 candidate reference genes across five different tissues and three different abiotic stresses in Isatis indigotica Fort. Additionally, the IiYUC6 gene associated with IAA biosynthesis was applied to validate the candidate reference genes. The analysis results of the geNorm, NormFinder, and BestKeeper algorithms indicated certain differences for the different sample sets and different experiment conditions. Considering all of the algorithms, PP2A-4 and TUB4 were recommended as the most stable reference genes for total and different tissue samples, respectively. Moreover, RPL15 and PP2A-4 were considered to be the most suitable reference genes for abiotic stress treatments. The obtained experimental results might contribute to improved accuracy and credibility for the expression levels of target genes by qRT-PCR normalization in I. indigotica. PMID:28702046
An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-09-01
Electrical impedance tomography (EIT) produces an image of the internal conductivity distribution in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and image prior terms. Such a formulation is computationally convenient, but it favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and the robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing data with known electrode errors, which require rigorous regularization and cause the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but it also provides high contrast resolution on organ boundaries.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤ n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high order.
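The underdetermined value-plus-derivative setup is easy to reproduce in numpy. The sketch below minimizes the plain ℓ2 norm of the coefficient vector (which `lstsq` returns for an underdetermined consistent system) rather than the Sobolev norm of the paper, so it illustrates only the interpolation structure, not the convergence theory; the target function sin and the sample points are illustrative.

```python
import numpy as np

f, fp = np.sin, np.cos                    # target function and its derivative
pts = np.array([0.05, 0.3, 0.55, 0.8, 0.95])
deg = 20
k = np.arange(deg + 1)

V  = pts[:, None] ** k                        # rows enforce p(t_i) = f(t_i)
Vd = k * pts[:, None] ** np.maximum(k - 1, 0) # rows enforce p'(t_i) = f'(t_i)
A = np.vstack([V, Vd])                        # 10 equations, 21 unknowns
b = np.concatenate([f(pts), fp(pts)])
c = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm coefficient vector

def p(t):
    return (np.atleast_1d(t)[:, None] ** k) @ c

def dp(t):
    t = np.atleast_1d(t)
    return (k * t[:, None] ** np.maximum(k - 1, 0)) @ c
```

With more unknown coefficients than constraints the interpolation conditions can always be met, mirroring the paper's observation that a sufficiently large degree guarantees solvability.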
Superconducting transition temperature of a boron nitride layer with a high niobium coverage.
NASA Astrophysics Data System (ADS)
Vazquez, Gerardo; Magana, Fernando
We explore the possibility of inducing superconductivity in a boron nitride (BN) sheet by doping its surface with Nb atoms sitting at the centers of the hexagons. We used first-principles density functional theory in the generalized gradient approximation. The Quantum ESPRESSO package was used with norm-conserving pseudopotentials. The structure considered was relaxed to its minimum-energy configuration. Phonon frequencies were calculated using the linear-response technique on several phonon wave-vector meshes. The electron-phonon coupling parameter was calculated for a number of k meshes. The superconducting critical temperature was estimated using the Allen-Dynes formula with μ* = 0.1 - 0.15. We note that Nb is a good candidate for inducing a superconducting transition in the BN-metal system. We thank Dirección General de Asuntos del Personal Académico de la Universidad Nacional Autónoma de México for partial financial support through Grant IN-106514, and we also thank the Miztli supercomputing center for technical assistance.
Estimation of Occupational Test Norms from Job Analysis Data.
ERIC Educational Resources Information Center
Mecham, Robert C.
Occupational norms exist for some tests, and differences in the distributions of test scores by occupation are evident. Sampling error (SE), situationally specific factors (SSFs), and differences in job content (DIJCs) were explored as possible reasons for the observed differences. SE was explored by analyzing 742 validity studies performed by the…
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation problem for a noncircular (NC) source in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and real-valued data are obtained by unitary transformation. A real-valued block-sparse model is then established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization, achieving enhanced sparsity of the solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because the noncircular properties of signals are used to extend the virtual array aperture and the additional real structure suppresses the noise, the proposed method provides better performance than conventional sparse-recovery-based algorithms. Furthermore, the proposed method can handle the underdetermined DOA estimation case. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
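The workhorse inside nuclear-norm minimization is singular-value thresholding, the proximal operator of the nuclear norm. A generic numpy sketch on a low-rank denoising toy (not the paper's UNNM algorithm or its MIMO data model; the sizes and threshold are illustrative):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of
    tau * nuclear norm. Shrinks every singular value by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(5)
L_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
noisy = L_true + 0.1 * rng.standard_normal((30, 30))
denoised = svt(noisy, tau=1.5)
rank_hat = np.linalg.matrix_rank(denoised, tol=1e-6)
```

Thresholding wipes out the small singular values carried by the noise, so the recovered matrix is low rank and closer to the ground truth than the noisy observation; nuclear-norm solvers apply this operator iteratively.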
Requirements for Coregistration Accuracy in On-Scalp MEG.
Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri
2018-06-22
Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of sensors in relation to the head. With the adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems that use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and on source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.
Estimation and Control with Relative Measurements: Algorithms and Scaling Laws
2007-09-01
…eigenvector of L⁻¹ corresponding to its largest eigenvalue. Since L⁻¹ is a positive matrix, Perron-Frobenius theory tells us that |u1| := {|u11|…the Frobenius norm of a matrix, and a linear vector space SV as the space of all bounded node-functions with respect to the above-defined norm…‖e‖²F, where Eu is the set of edges in E that are incident on u. It can be shown from the relationship between the Frobenius norm and the singular
NASA Technical Reports Server (NTRS)
Haering, E. A., Jr.; Burcham, F. W., Jr.
1984-01-01
A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim, was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.
2014-01-01
We propose a smooth-approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm achieves improved performance in both respects by incorporating a smooth approximation of the l0 norm (SL0) as a penalty on the coefficients in the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that the proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
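A minimal sketch of the zero-attractor idea described above, using a Gaussian smooth l0 approximation (a common choice, not necessarily the paper's exact penalty); the parameters σ and ρ and the coefficient values are illustrative assumptions.

```python
import numpy as np

def sl0_zero_attractor(w, sigma=0.05, rho=1e-3):
    """Gradient of the smooth l0 penalty sum(1 - exp(-w**2 / (2*sigma**2))).

    Coefficients much smaller than sigma are pulled proportionally toward
    zero (the 'zero attractor'); coefficients much larger than sigma are
    left essentially untouched, so the true taps are not biased."""
    return rho * (w / sigma**2) * np.exp(-w**2 / (2 * sigma**2))

# One hypothetical sparsity-aware step: subtract the attractor from a
# mostly-sparse coefficient vector (large taps at indices 0 and 2).
w = np.array([0.9, 1e-4, -0.5, 1e-5])
w_new = w - sl0_zero_attractor(w)
```

In a full SL0-APA filter, this term would be subtracted after each standard APA coefficient update, which is where the faster convergence on sparse channels comes from.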
Van Assche, Jasper; Asbrock, Frank; Roets, Arne; Kauff, Mathias
2018-05-01
Positive neighborhood norms, such as strong local networks, are critical to people's satisfaction with, perceived disadvantage of, and intentions to stay in their neighborhood. At the same time, local ethnic diversity is said to be detrimental for these community outcomes. Integrating both frameworks, we tested whether the negative consequences of diversity occur even when perceived social norms are positive. Study 1 (N = 1,760 German adults) showed that perceptions of positive neighborhood norms buffered against the effects of perceived diversity on moving intentions via neighborhood satisfaction and perceived neighborhood disadvantage. Study 2 (N = 993 Dutch adults) replicated and extended this moderated mediation model using other characteristics of diversity (i.e., objective and estimated minority proportions). Multilevel analyses again revealed consistent buffering effects of positive neighborhood norms. Our findings are discussed in light of the ongoing public and political debate concerning diversity and social and communal life.
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)(sub x), and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
Attitude Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation, it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.
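The mixed representation described here can be sketched as a multiplicative "reset" step: the filter's three-component error estimate is folded into the global quaternion and the unit norm is restored. The function names, the scalar-last component ordering, and the first-order error quaternion are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of two quaternions, scalar-last convention [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def reset_attitude(q_ref, delta_theta):
    """Fold a small three-component attitude error (rotation vector, rad)
    back into the reference quaternion, then renormalize to restore the
    unit-norm constraint that the Kalman update does not preserve."""
    dq = np.concatenate([0.5 * delta_theta, [1.0]])  # first-order error quaternion
    q = quat_mult(q_ref, dq)
    return q / np.linalg.norm(q)

# Apply a 1 mrad error about the x-axis to the identity attitude.
q = reset_attitude(np.array([0.0, 0.0, 0.0, 1.0]), np.array([1e-3, 0.0, 0.0]))
```

The renormalization at the end is exactly the point the abstract raises: the three-component error state is unconstrained and easy to filter, while the unit-norm quaternion carries the singularity-free global estimate.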
Oude Mulders, Jaap; Henkens, Kène; Schippers, Joop
2017-10-01
Top managers guide organizational strategy and practices, but their role in the employment of older workers is understudied. We study the effects that age-related workplace norms of top managers have on organizations' recruitment and retention practices regarding older workers. We investigate two types of age-related workplace norms, namely age equality norms (whether younger and older workers should be treated equally) and retirement age norms (when older workers are expected to retire) while controlling for organizational and national contexts. Data collected among top managers of 1,088 organizations from six European countries were used for the study. Logistic regression models were run to estimate the effects of age-related workplace norms on four different organizational outcomes: (a) recruiting older workers, (b) encouraging working until normal retirement age, (c) encouraging working beyond normal retirement age, and (d) rehiring retired former employees. Age-related workplace norms of top managers affect their organizations' practices, but in different ways. Age equality norms positively affect practices before the boundary of normal retirement age (Outcomes a and b), whereas retirement age norms positively affect practices after the boundary of normal retirement age (Outcomes c and d). Changing age-related workplace norms of important actors in organizations may be conducive to better employment opportunities and a higher level of employment participation of older workers. However, care should be taken to target the right types of norms, since targeting different norms may yield different outcomes. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Time-Indexed Effect Size Metric for K-12 Reading and Math Education Evaluation
ERIC Educational Resources Information Center
Lee, Jaekyung; Finn, Jeremy; Liu, Xiaoyan
2011-01-01
Through a synthesis of test publisher norms and national longitudinal datasets, this study provides new national norms of academic growth in K-12 reading and math that can be used to reinterpret conventional effect sizes in time units. We propose d', a time-indexed effect size metric, to estimate how long it would take for an "untreated"…
Reliability and norms for the 10-item self-motivation inventory: The TIGER Study
USDA-ARS?s Scientific Manuscript database
The Self-Motivation Inventory (SMI) has been shown to be a predictor of exercise dropout. The original SMI of 40 items has been shortened to 10 items and the psychometric qualities of the 10-item SMI are not known. To estimate the reliability of a 10-item SMI and develop norms for an ethnically dive...
Autoregressive model in the Lp norm space for EEG analysis.
Li, Peiyang; Wang, Xurui; Li, Fali; Zhang, Rui; Ma, Teng; Peng, Yueheng; Lei, Xu; Tian, Yin; Guo, Daqing; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2015-01-30
The autoregressive (AR) model is widely used in electroencephalogram (EEG) analyses such as waveform fitting, spectrum estimation, and system identification. In real applications, EEGs are inevitably contaminated with unexpected outlier artifacts, and this must be overcome. However, most of the current AR models are based on the L2 norm structure, which exaggerates the outlier effect due to the square property of the L2 norm. In this paper, a novel AR objective function is constructed in the Lp (p≤1) norm space with the aim of compressing the outlier effects on EEG analysis, and a fast iteration procedure is developed to solve this new AR model. The quantitative evaluation using simulated EEGs with outliers proves that the proposed Lp (p≤1) AR can estimate the AR parameters more robustly than the Yule-Walker, Burg and LS methods, under various simulated outlier conditions. The actual application to the resting EEG recording with ocular artifacts also demonstrates that Lp (p≤1) AR can effectively address the outliers and recover a resting EEG power spectrum that is more consistent with its physiological basis. Copyright © 2014 Elsevier B.V. All rights reserved.
Kim, Minzee; Longhofer, Wesley; Boyle, Elizabeth Heger; Nyseth, Hollie
2014-01-01
Using the case of adolescent fertility, we ask the questions of whether and when national laws have an effect on outcomes above and beyond the effects of international law and global organizing. To answer these questions, we utilize a fixed-effect time-series regression model to analyze the impact of minimum-age-of-marriage laws in 115 poor- and middle-income countries from 1989 to 2007. We find that countries with strict laws setting the minimum age of marriage at 18 experienced the most dramatic decline in rates of adolescent fertility. Trends in countries that set this age at 18 but allowed exceptions (for example, marriage with parental consent) were indistinguishable from countries that had no such minimum-age-of-marriage law. Thus, policies that adhere strictly to global norms are more likely to elicit desired outcomes. The article concludes with a discussion of what national law means in a diffuse global system where multiple actors and institutions make the independent effect of law difficult to identify. PMID:25525281
Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu
2002-01-01
A new approach for estimating the chemical rank of a three-way array, called the principal norm vector orthogonal projection method, has been proposed. The method is based on the fact that the chemical rank of the three-way data array is equal to that of the column space of the unfolded matrix along the spectral or chromatographic mode. A vector with maximum Frobenius norm is selected among all the column vectors of the unfolded matrix as the principal norm vector (PNV). A transformation is conducted on the column vectors with an orthogonal projection matrix formulated by the PNV. The mathematical rank of the column space of the residual matrix thus obtained should decrease by one. Such orthogonal projection is carried out repeatedly until the contribution of the chemical species to the signal data is entirely deleted. At that point the decrease of the mathematical rank equals the chemical rank, and the remaining residual subspace is entirely due to the noise contribution. The chemical rank can then be estimated easily using an F-test. The method has been applied successfully to a simulated HPLC-DAD type three-way data array and two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. The simulation with added relatively high-level noise shows that the method is robust to heteroscedastic noise. The proposed algorithm is simple and easy to program, with a light computational burden.
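A minimal sketch of the projection loop on a noiseless unfolded matrix, with a simple residual-norm threshold standing in for the paper's F-test; the matrix sizes and tolerance are assumptions.

```python
import numpy as np

def pnv_rank(M, tol=1e-8):
    """Estimate the rank of the column space by repeatedly projecting out the
    principal norm vector (the column with the largest Euclidean norm).
    Each projection lowers the mathematical rank by exactly one; the loop
    stops when only a negligible residual remains."""
    R = M.astype(float).copy()
    rank = 0
    while np.linalg.norm(R) > tol * max(1.0, np.linalg.norm(M)):
        j = np.argmax(np.linalg.norm(R, axis=0))       # principal norm vector
        v = R[:, j]
        P = np.eye(R.shape[0]) - np.outer(v, v) / (v @ v)
        R = P @ R                                       # project PNV out of all columns
        rank += 1
    return rank

# Rank-3 matrix assembled from three random rank-1 terms.
rng = np.random.default_rng(2)
M = sum(np.outer(rng.standard_normal(20), rng.standard_normal(8)) for _ in range(3))
```

With noisy data the stopping rule matters: the residual never reaches zero, which is why the paper uses an F-test on the remaining residual subspace instead of a fixed threshold.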
Natural selection and genetic variation for reproductive reaction norms in a wild bird population.
Brommer, Jon E; Merilä, Juha; Sheldon, Ben C; Gustafsson, Lars
2005-06-01
Many morphological and life-history traits show phenotypic plasticity that can be described by reaction norms, but few studies have attempted individual-level analyses of reaction norms in the wild. We analyzed variation in individual reaction norms between laying date and three climatic variables (local temperature, local rainfall, and North Atlantic Oscillation) of 1126 female collared flycatchers (Ficedula albicollis) with a restricted maximum likelihood linear mixed model approach using random-effect best linear unbiased predictor estimates for the elevation (i.e., expected laying date in the average environment) and slope (i.e., adjustment in laying date as a function of environment) of females' reaction norms. Variation in laying date was best explained by local temperature, and individual females differed in both the elevation and the slope of their laying date-temperature reaction norms. As revealed by animal model analyses, there was weak evidence for additive genetic variance of elevation (h2 +/- SE = 0.09 +/- 0.09), whereas there was no evidence for heritability of slope (h2 +/- SE = 0.00 +/- 0.01). Selection analysis, using a female's lifetime production of fledglings or recruits as an estimate of her fitness, revealed significant selection for a lower phenotypic value and breeding value for elevation (i.e., earlier laying date at the average temperature). There was selection for steeper phenotypic values of slope (i.e., greater plasticity in the adjustment of laying date to temperature), but no significant selection on the breeding values of slope. Although these results suggest that phenotypic laying date is influenced by additive genetic factors, as well as by an interaction with the environment, selection on plasticity would not produce an evolutionary response.
ERIC Educational Resources Information Center
Ott, Carol H.; Doyle, Lynn H.
2005-01-01
According to social norms theory, when high school students overestimate the use of alcohol, tobacco, and other drugs (ATOD) by their peers, they tend to use more themselves. The purpose of this study was to determine whether these overestimations (misperceptions) could be corrected through a similar-age peer-to-peer interactive social norms…
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed: (1) the relationship between the estimation of magnitudes and the estimation of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; (2) the orderings of the degrees of prestige of the professions resulting from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (estimation of magnitudes of minimum salaries).
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates
Curtis, Caroline A.; Bradley, Bethany A.
2016-01-01
Background Although increasingly sophisticated environmental measures are being applied to species distributions models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation.
Conclusion Our results show that distribution data are consistently broader than USDA PLANTS experts’ knowledge and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859
First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity
NASA Technical Reports Server (NTRS)
Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.
Improving absolute gravity estimates by the Lp-norm approximation of the ballistic trajectory
NASA Astrophysics Data System (ADS)
Nagornyi, V. D.; Svitlov, S.; Araya, A.
2016-04-01
Iteratively re-weighted least squares (IRLS) was used to simulate the Lp-norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the Lp-approximation with 3 < p < 4 was found to yield several times more precise gravity estimates compared to standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive-noise conditions.
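A sketch of the idea on a simulated drop: fit the quadratic trajectory model in the Lp norm with two IRLS passes starting from the least-squares solution, as the abstract reports. The sampling, noise level, and trajectory parameters are illustrative assumptions, not a gravimeter's real data.

```python
import numpy as np

def fit_trajectory_lp(t, z, p=3.5, n_iter=2, eps=1e-12):
    """Fit z = z0 + v0*t + 0.5*g*t**2 in the Lp norm via IRLS.

    For p > 2 the residual weights |r|**(p-2) grow with the residual, which
    suits platykurtic (e.g., uniform) noise whose extremes carry information."""
    X = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    beta = np.linalg.lstsq(X, z, rcond=None)[0]        # least-squares start
    for _ in range(n_iter):                            # two re-weighting passes
        r = z - X @ beta
        w = np.maximum(np.abs(r), eps) ** (p - 2)
        W = w[:, None] * X
        beta = np.linalg.solve(X.T @ W, X.T @ (w * z))
    return beta                                        # [z0, v0, g]

# Simulated free-fall trajectory with uniform (platykurtic) perturbations.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 0.2, 200)
z = 0.001 + 0.1*t + 0.5*9.81*t**2 + rng.uniform(-1e-6, 1e-6, t.size)
z0, v0, g = fit_trajectory_lp(t, z)
```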
Athletic identity, descriptive norms, and drinking among athletes transitioning to college
Grossbard, Joel R.; Geisner, Irene M.; Mastroleo, Nadine R.; Kilmer, Jason R.; Turrisi, Rob; Larimer, Mary E.
2010-01-01
College student–athletes are at risk for heavy alcohol consumption and related consequences. The present study evaluated the influence of college student and college athlete descriptive norms and levels of athletic identity on drinking and related consequences among incoming college students attending two universities (N = 1119). Prior to the beginning of their first year of college, students indicating high school athletic participation completed assessments of athletic identity, alcohol consumption, drinking-related consequences, and normative perceptions of alcohol use. Estimations of drinking by college students and student–athletes were significantly greater than self-reported drinking. Athletic identity moderated associations among gender, perceived norms, drinking, and related consequences. Athlete-specific norms had a stronger effect on drinking among those reporting higher levels of athletic identity, and higher levels of athletic identity exclusively protected males from experiencing drinking-related consequences. Implications of the role of athletic identity in the development of social norms interventions targeted at high school athletes transitioning to college are discussed. PMID:19095359
NASA Astrophysics Data System (ADS)
Kamagara, Abel; Wang, Xiangzhao; Li, Sikun
2018-03-01
We propose a method to compensate for the projector intensity nonlinearity induced by the gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectral analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal features such as spectral component correlation relationships in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of projector nonlinearity is realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique requires no calibration information, technical knowledge, or specification of the fringe projection unit. This is promising for developing a modular and calibration-invariant model for nonlinear intensity (gamma) compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving the phase measuring accuracy of phase-shifting fringe pattern projection profilometry.
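The premise that gamma introduces higher-order harmonics into a sinusoidal fringe is easy to verify numerically. The sketch below computes an ordinary magnitude spectrum, not the bispectrum itself, and the gamma value of 2.2 and fringe model are illustrative assumptions.

```python
import numpy as np

# An ideal fringe is a raised cosine; gamma distortion raises it to a power,
# creating harmonics that are phase-coupled to the fundamental (the coupling
# is what a bispectral detector exploits).
phi = np.linspace(0.0, 2*np.pi, 1024, endpoint=False)
ideal = 0.5 + 0.5*np.cos(phi)
distorted = ideal ** 2.2                     # assumed projector gamma of 2.2

spec = np.abs(np.fft.rfft(distorted)) / phi.size
h2_ideal = np.abs(np.fft.rfft(ideal))[2] / phi.size   # 2nd harmonic, clean fringe
h2 = spec[2]                                          # 2nd harmonic, distorted fringe
```

The clean fringe has essentially no energy at the second harmonic, while the gamma-distorted fringe does; minimizing such harmonic content (via the bispectral norm, in the paper) recovers the sinusoidal profile without calibrating the projector.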
2011-01-01
Background The identification of genes or quantitative trait loci that are expressed in response to different environmental factors such as temperature and light, through functional mapping, critically relies on precise modeling of the covariance structure. Previous work used separable parametric covariance structures, such as a Kronecker product of first-order autoregressive [AR(1)] matrices, that do not account for interaction effects of different environmental factors. Results We implement a more robust nonparametric covariance estimator to model these interactions within the framework of functional mapping of reaction norms to two signals. Our results from Monte Carlo simulations show that this estimator can be useful in modeling interactions that exist between two environmental signals. The interactions are simulated using nonseparable covariance models with spatio-temporal structural forms that mimic interaction effects. Conclusions The nonparametric covariance estimator has an advantage over separable parametric covariance estimators in the detection of QTL location, thus extending the breadth of use of functional mapping in practical settings. PMID:21269481
Effects of aging on neuromagnetic mismatch responses to pitch changes.
Cheng, Chia-Hsiung; Baillet, Sylvain; Hsiao, Fu-Jung; Lin, Yung-Yang
2013-06-07
Although aging-related alterations in auditory sensory memory and involuntary change discrimination have been widely studied, it remains controversial whether the mismatch negativity (MMN) or its magnetic counterpart (MMNm) is modulated by physiological aging. This study aimed to examine the effects of aging on mismatch activity to pitch deviants by using whole-head magnetoencephalography (MEG) together with distributed source modeling analysis. The neuromagnetic responses to oddball paradigms consisting of standards (1000 Hz, p=0.85) and deviants (1100 Hz, p=0.15) were recorded in healthy young (n=20) and aged (n=18) male adults. We used minimum norm estimates for source reconstruction to characterize the spatiotemporal neural dynamics of MMNm responses. Distributed activations to MMNm were identified in the bilateral fronto-temporo-parietal areas. Compared to younger participants, the elderly exhibited a significant reduction of cortical activation in the bilateral superior temporal gyri, superior temporal sulci, inferior frontal gyri, orbitofrontal cortices and right inferior parietal lobules. In conclusion, our results suggest an aging-related decline in auditory sensory memory and automatic change detection as indexed by MMNm. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Fruit and Vegetable Attitudes, Norms, and Intake in Low-Income Youth.
Di Noia, Jennifer; Cullen, Karen Weber
2015-12-01
Fruit and vegetable (FV) attitudes and norms have been shown to influence intake in youth; yet research with low-income youth and studies supplementing self-report with objective measures of intake are lacking. Cross-sectional survey data on self-rated FV intake, FV attitudes, and FV norms were collected in a sample of 116 youth attending a residential summer camp serving low-income families. FV intake also was estimated by direct observation. Differences between self-rated and observed FV intake, perceived and observed peer intake, and perceived and peer-reported attitudes toward eating FVs were assessed with paired samples t tests. The role of FV attitudes, descriptive norms (perceived peer FV intake), injunctive norms (perceived peer attitudes toward eating FVs), and actual norms (observed peer FV intake and peer-reported FV attitudes) in predicting FV intake also was examined with multiple regression analysis. Youth misperceived their own and their peers' FV intake (i.e., overestimated intake of fruit and underestimated intake of vegetables) and believed that peers held less favorable attitudes toward eating FVs than was the case. The models predicting self-rated intake were significant, accounting for 34% of the variance in fruit intake and 28% of the variance in vegetable intake. Attitudes and descriptive norms were positively associated with FV intake, and observed peer fruit intake was negatively associated with fruit intake. Findings suggest that in low-income youth, FV attitudes, descriptive norms, and normative peer behavior predict perceived but not actual intake. Youth may benefit from intervention to promote favorable FV attitudes and norms. A focus on descriptive norms holds promise for improving self-rated intake in this population. © 2015 Society for Public Health Education.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
Geodesy in Antarctica: A pilot study based on the TAMDEF GPS network, Victoria Land, Antarctica
NASA Astrophysics Data System (ADS)
Vazquez Becerra, Guadalupe Esteban
The objective of the research presented in this dissertation is a combination of practical and theoretical problems to investigate unique aspects of GPS (Global Positioning System) geodesy in Antarctica. This is derived from a complete analysis of a GPS network called TAMDEF (Trans Antarctic Mountains Deformation), located in Victoria Land, Antarctica. In order to permit access to the International Terrestrial Reference Frame (ITRF), the McMurdo (MCM4) IGS (the International GNSS Service for Geodynamics, formerly the International GPS Service) site was adopted as part of the TAMDEF network. The following scientific achievements obtained from this analysis are discussed: (1) The GPS data processing for the TAMDEF network relied on the PAGES (Program for Adjustment of GPS Ephemerides) software that uses the double-differenced iono-free linear combination, which helps remove a large part of the bias (at the mm level) in the final positioning. (2) To validate the use of different antenna types in TAMDEF, an antenna testing experiment was conducted using the National Geodetic Survey (NGS) antenna calibration data, appropriate for each antenna type. Sub-daily and daily results from the antenna testing are at the sub-millimeter level, based on the fact that 24-hour solutions were used to average any possible bias. (3) A potential contributor that might have an impact on the positioning of the TAMDEF stations is the pseudorange multipath effect; thus, the root mean squared variations were estimated and analyzed in order to identify the most and least affected sites. MCM4 was found to be the site with the highest multipath, which is problematic, since MCM4 is the primary ITRF access point for this part of Antarctica. Additionally, results from the pseudorange multipath analysis can be used for further data cleaning to improve positioning results.
(4) The ocean tide modeling relied on the use of two models, CATS02.01 (Circum Antarctic Tidal Simulation) and TPXO6.2 (TOPEX/Poseidon), to investigate which model suits the Antarctic conditions best and its effect on the vertical coordinate component at the TAMDEF sites. (5) The scatter of the time-series results of the coordinate components for the TAMDEF sites is smaller when processed with respect to the Antarctic tectonic plate (Case I), in comparison with the other tectonic plates outside Antarctica (Cases II-IV). Also, the seasonal effects seen in the time series of the TAMDEF sites with longer data spans are site dependent; thus, data processing is not the cause of these effects. (6) Furthermore, the results coming from a homogeneous global network with coordinates referred and transformed to the ITRF2000 at epoch 2005.5 reflect the quality of the solution obtained when processing TAMDEF network data with respect to the Antarctic tectonic plate. (7) An optimal data reduction strategy was developed, based on three different troposphere models and mapping functions, which were tested and used to estimate the total wet zenith delay (TWZD); this was later transformed to precipitable water vapor (PWV). PWV was estimated from GPS measurements and validated with a numerical weather model, AMPS (Antarctic Mesoscale Prediction System), and radiosonde PWV. Additionally, to validate the TWZD estimates at the MCM4 site before their conversion into GPS PWV, these estimates were directly compared to the TWZD computed by the CDDIS (Crustal Dynamics Data Information System) analysis center. (8) The results from the Least-Squares adjustment with Stochastic Constraints (SCLESS) as performed with PAGES are very comparable (mm level) to those obtained from the alternative adjustment approaches: MINOLESS (Minimum-Norm Least-Squares adjustment), Partial-MINOLESS (Partial Minimum-Norm Least-Squares adjustment), and BLIMPBE (Best Linear Minimum Partial-Bias Estimation).
Based on the applied network adjustment models within the Antarctic tectonic plate (Case I), it can be demonstrated that the GPS data used are free of bias once proper care has been taken of the ionosphere, troposphere, multipath, and other error sources that affect GPS positioning. Overall, it can be concluded that no suspected bias was present in the obtained results; thus, GPS is indeed capable of capturing signals that can be used for further geophysical interpretation within Antarctica.
Norton, Melanie K; Smith, Megan V; Magriples, Urania; Kershaw, Trace S
2016-09-01
This study examined the relationship between traditional masculine role norms (status, toughness, anti-femininity) and psychosocial mechanisms of sexual risk (sexual communication, sexual self-efficacy) among young, low-income, and minority parenting couples. Between 2007 and 2011, 296 pregnant adolescent females and their male partners were recruited from urban obstetrics clinics in Connecticut. Data regarding participants' beliefs in masculine role norms, frequency of general sex communication and sexual risk communication, and sexual self-efficacy were collected via computer-assisted self-interviews. Generalized estimating equation (GEE) models were used to test for actor effects (whether a person's masculine role norms at baseline influence the person's own psychosocial variables at 6-month follow-up) and partner effects (whether a partner's masculine role norms at baseline influence an actor's psychosocial variables at 6-month follow-up). Results revealed that higher actor status norms were significantly associated with more sexual self-efficacy, higher actor toughness norms were associated with less sexual self-efficacy, and higher actor anti-femininity norms were significantly associated with less general sex communication, sexual risk communication, and sexual self-efficacy. No partner effects were found. These results indicate a need for redefining masculine role norms through family centered approaches in pregnant or parenting adolescent couples to increase sexual communication and sexual self-efficacy. Further research is needed to understand partner effects in the context of a relationship and on subsequent sexual risk behavior. © Society for Community Research and Action 2016.
American Sign Language/English bilingual model: a longitudinal study of academic growth.
Lange, Cheryl M; Lane-Outlaw, Susan; Lange, William E; Sherwood, Dyan L
2013-10-01
This study examines reading and mathematics academic growth of deaf and hard-of-hearing students instructed through an American Sign Language (ASL)/English bilingual model. The study participants were exposed to the model for a minimum of 4 years. The study participants' academic growth rates were measured using the Northwest Evaluation Association's Measure of Academic Progress assessment and compared with a national-normed group of grade-level peers that consisted primarily of hearing students. The study also compared academic growth for participants by various characteristics such as gender, parents' hearing status, and secondary disability status and examined the academic outcomes for students after a minimum of 4 years of instruction in an ASL/English bilingual model. The findings support the efficacy of the ASL/English bilingual model.
A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Zhang, Guoyu; Huang, Chengming; Li, Meng
2018-04-01
We consider the numerical simulation of the coupled nonlinear space-fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. First, we focus on a rigorous analysis of conservation laws for the discrete system; the definitions of discrete mass and energy correspond with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we establish unconditional convergence (that is, the error estimates hold without any mesh-ratio restriction): we derive L^2-norm error estimates for the nonlinear equations and L^∞-norm error estimates for the linear equations. Finally, some numerical experiments are included, showing results in agreement with the theoretical predictions.
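As a side illustration of the conservation property discussed above, here is a minimal sketch of Crank-Nicolson time stepping for the standard (non-fractional) linear Schrödinger equation with a finite-difference Laplacian; it is not the paper's Galerkin scheme, but it shows why the CN update preserves the discrete mass exactly (the one-step propagator is a Cayley transform of a Hermitian matrix, hence unitary).

```python
import numpy as np

# Illustrative sketch (not the paper's Galerkin FEM): Crank-Nicolson
# stepping for i u_t = -u_xx on [0, 1) with periodic boundaries.
# The update (I + 0.5j*dt*H) u^{n+1} = (I - 0.5j*dt*H) u^n with
# Hermitian H is unitary, so the discrete mass ||u||^2 is conserved.

def cn_step_matrix(n, dx, dt):
    # Periodic second-order finite-difference Laplacian
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))
    lap[0, -1] = lap[-1, 0] = 1.0
    H = -lap / dx**2                      # Hermitian discrete Hamiltonian
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B)          # one-step propagator (unitary)

n, dt = 128, 1e-3
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(2j * np.pi * x) + 0.5 * np.exp(-4j * np.pi * x)
P = cn_step_matrix(n, x[1] - x[0], dt)
mass0 = np.linalg.norm(u)**2
for _ in range(200):
    u = P @ u
print(abs(np.linalg.norm(u)**2 - mass0))  # near machine precision
```

The same mass-conservation check is the natural sanity test for any scheme claiming the discrete analogue of a physical conservation law.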
The Role of Masculine Norms and Informal Support on Mental Health in Incarcerated Men
Iwamoto, Derek Kenji; Gordon, Derrick; Oliveros, Arazais; Perez-Cabello, Arturo; Brabham, Tamika; Lanza, Steve; Dyson, William
2012-01-01
Mental health problems, in general, and major depression in particular, are prevalent among incarcerated men. It is estimated that 23% of state inmates report experiencing symptoms of major depression. Despite the high rates of depressive symptoms, there is little understanding about the psychosocial factors that are associated with depressive and anxiety symptoms of incarcerated men. One factor relevant to the mental health of incarcerated men is their adherence to traditional masculine norms. We investigated the role of masculine norms and informal support on depressive and anxiety symptoms among 123 incarcerated men. The results revealed that adherence to the masculine norm of emotional control was negatively associated with depressive symptoms, while heterosexual presentation and informal support were related to both depressive and anxiety symptoms. High levels of reported informal support moderated the effects of heterosexual presentation on depressive and anxiety symptoms. Public health and clinical implications are discussed. PMID:23139638
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels whose layer thickness increased geometrically with depth, which biased the mean-resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units.
The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
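The contrast between the mean and minimum-based aggregations above can be sketched in a few lines. This is a purely hypothetical illustration (column values and function names are made up, and the report's minimum-adjusted method includes further heterogeneity adjustments not shown here): the vertical mean averages away a thin low-resistivity confining unit, while the minimum lets that least-permeable layer control the leakage-potential estimate.

```python
import numpy as np

# Hypothetical sketch of the two aggregation choices described above.
# Lower resistivity is taken as a proxy for finer, less permeable
# sediment, so a thin clay confining unit appears as one low value.

def mean_method(column):
    # Simple vertical mean: the confining unit is averaged away.
    return float(np.mean(column))

def minimum_unadjusted(column):
    # The least-permeable layer controls vertical water transmission.
    return float(np.min(column))

# Vertical column: sand over a thin clay confining unit (ohm-m).
column = np.array([80.0, 75.0, 70.0, 12.0, 65.0])
print(mean_method(column))        # 60.4: still looks fairly leaky
print(minimum_unadjusted(column)) # 12.0: the clay controls the estimate
```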
Evaluating Level of Specificity of Normative Referents in Relation to Personal Drinking Behavior*
Larimer, Mary E.; Kaysen, Debra L.; Lee, Christine M.; Kilmer, Jason R.; Lewis, Melissa A.; Dillworth, Tiara; Montoya, Heidi D.; Neighbors, Clayton
2009-01-01
Objective: Research has found perceived descriptive norms to be one of the strongest predictors of college student drinking, and several intervention approaches have incorporated normative feedback to correct misperceptions of peer drinking behavior. Little research has focused on the role of the reference group in normative perceptions. The current study sought to examine whether normative perceptions vary based on specificity of the reference group and whether perceived norms for more specific reference-group norms are related to individual drinking behavior. Method: Participants were first-year undergraduates (n = 1,276, 58% female) randomly selected from a university list of incoming students. Participants reported personal drinking behavior and perceived descriptive norms for eight reference groups, including typical student; same gender, ethnicity, or residence; and combinations of those reference groups (e.g., same gender and residence). Results: Findings indicated that participants distinguished among different reference groups in estimating descriptive drinking norms. Moreover, results indicated misperceptions in drinking norms were evident at all levels of specificity of the reference group. Additionally, findings showed perceived norms for more specific groups were uniquely related to participants' own drinking. Conclusions: These results suggest that providing normative feedback targeting at least one level of specificity to the participant (i.e., beyond what the “typical” student does) may be an important tool in normative feedback interventions. PMID:19538919
Elmore, Kristen; Scull, Tracy M.; Kupersmidt, Janis B.
2016-01-01
Adolescents’ media environment offers information about who uses substances and what happens as a result—how youth interpret these messages likely determines their impact on normative beliefs about alcohol and tobacco use. The Message Interpretation Processing (MIP) theory predicts that substance use norms are influenced by cognitions associated with the interpretation of media messages. This cross-sectional study examined whether high school adolescents’ (n=817, 48% female, 64% white) media-related cognitions (i.e., similarity, realism, desirability, identification) were related to their perceptions of substance use norms. Results revealed that adolescents’ media-related cognitions explained a significant amount of variance in perceived social approval for and estimated prevalence of peer alcohol and tobacco use, above and beyond previous use and demographic covariates. Compared to prevalence norms, social approval norms were more closely related to adolescents’ media-related cognitions. Results suggest that critical thinking about media messages can inhibit normative perceptions that are likely to increase adolescents’ interest in alcohol and tobacco use. PMID:27837371
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
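Since the brief above compares l1-norm SVR with sparse-coding algorithms such as orthogonal matching pursuit (OMP), here is a minimal numpy OMP for sparse linear regression. It is an illustrative sketch, not the paper's implementation; library versions (e.g. scikit-learn's OrthogonalMatchingPursuit) are more robust.

```python
import numpy as np

# Minimal orthogonal matching pursuit (OMP): greedily select k columns
# of A to approximate y, refitting by least squares on the support.

def omp(A, y, k):
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[[3, 17, 25]] = [2.0, -1.5, 1.0]     # 3-sparse ground truth
y = A @ x_true                              # noiseless observations
x_hat = omp(A, y, 3)
print(np.flatnonzero(np.abs(x_hat) > 1e-8)) # recovered support
```

In the noiseless, well-conditioned setting above, the greedy selection recovers the true support and the least-squares refit then reproduces the coefficients exactly.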
An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.
Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim
2015-10-01
In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables use of greater acceleration in 3D MR imaging.
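The basic principle that half-Fourier methods exploit can be shown in a toy 1D example (this is the idealized zero-phase case, not the paper's L1-norm phase-constrained reconstruction): a real-valued signal has a conjugate-symmetric spectrum, so roughly half of k-space determines the rest.

```python
import numpy as np

# Toy half-Fourier illustration: for a real-valued signal,
# F[n - k] = conj(F[k]), so the "acquired" half of the spectrum
# determines the missing half exactly. Real MR images have nonzero
# phase, which is why practical methods need a phase estimate.

rng = np.random.default_rng(1)
n = 64
signal = rng.standard_normal(n)          # real-valued "image" line
spectrum = np.fft.fft(signal)

# Keep only the acquired half: k = 0 .. n//2 (inclusive).
half = spectrum[: n // 2 + 1]

# Fill the missing half via conjugate symmetry.
filled = np.zeros(n, dtype=complex)
filled[: n // 2 + 1] = half
filled[n // 2 + 1:] = np.conj(half[1 : n // 2][::-1])
recon = np.fft.ifft(filled).real

print(np.max(np.abs(recon - signal)))    # near machine precision
```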
The Influence of Social Networking Photos on Social Norms and Sexual Health Behaviors
Jordan, Alexander H.
2013-01-01
Abstract Two studies tested whether online social networking technologies influence health behavioral social norms, and in turn, personal health behavioral intentions. In Study 1, experimental participants browsed peers' Facebook photos on a college network with a low prevalence of sexually suggestive content. Participants estimated the percentage of their peers who have sex without condoms, and rated their own future intentions to use condoms. Experimental participants, compared to controls who did not view photos, estimated that a larger percentage of their peers use condoms, and indicated a greater intention to use condoms themselves in the future. In Study 2, participants were randomly assigned to view sexually suggestive or nonsexually suggestive Facebook photos, and responded to sexual risk behavioral questions. Compared to participants viewing nonsuggestive photos, those who viewed sexually suggestive Facebook photos estimated that a larger percentage of their peers have unprotected sexual intercourse and sex with strangers and were more likely to report that they themselves would engage in these behaviors. Thus, online social networks can influence perceptions of the peer prevalence of sexual risk behaviors, and can influence users' own intentions with regard to such behaviors. These studies suggest the potential power of social networks to affect health behaviors by altering perceptions of peer norms. PMID:23438268
ERIC Educational Resources Information Center
Oakland, Thomas
New strategies for evaluating criterion-referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of norm-referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…
SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu, E-mail: hatt@univ-brest.fr
Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
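For readers unfamiliar with the baseline that SPEQTACLE generalizes, here is a sketch of the classic fuzzy C-means iteration on 1D intensities. This is plain FCM with Euclidean distance only; the paper's contribution, the automatic per-image norm estimation, is not reproduced here.

```python
import numpy as np

# Classic fuzzy C-means (FCM) sketch: alternate between the membership
# update u_ck ∝ d_ck^(-2/(m-1)) and the weighted-centroid update.

def fcm(X, c, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                           # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1)        # fuzzy centroids
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated 1D intensity clusters (e.g. background vs uptake).
rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
centers, U = fcm(X, c=2)
print(np.sort(centers))   # expected near [0, 5]
```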
Hauk, O; Keil, A; Elbert, T; Müller, M M
2002-01-30
We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
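One common regularized form of the minimum-norm estimate discussed above can be sketched directly: given a lead field L (sensors x sources) and sensor data d, the Tikhonov-regularized MN solution is s_hat = Lᵀ(LLᵀ + λI)⁻¹d, where λ plays the regularization role tied to SNR in the passage above. The dimensions and the focal-source example below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Regularized minimum-norm (MN) inverse sketch:
#   s_hat = L^T (L L^T + lam*I)^(-1) d
# lam trades spatial detail against noise suppression.

def minimum_norm(L, d, lam):
    n_sensors = L.shape[0]
    G = L @ L.T + lam * np.eye(n_sensors)
    return L.T @ np.linalg.solve(G, d)

rng = np.random.default_rng(3)
L = rng.standard_normal((32, 200))       # 32 sensors, 200 sources
s_true = np.zeros(200)
s_true[50] = 1.0                          # single focal source
d = L @ s_true                            # noiseless sensor data
s_hat = minimum_norm(L, d, lam=1e-6)
print(int(np.argmax(np.abs(s_hat))))      # expected to peak at index 50
```

The estimate is spatially blurred (it spreads over many sources) but peaks at the true generator, which is the behavior exploited when mapping band power topographies back to sources.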
Different Cortical Dynamics in Face and Body Perception: An MEG study
Meeren, Hanneke K. M.; de Gelder, Beatrice; Ahlfors, Seppo P.; Hämäläinen, Matti S.; Hadjikhani, Nouchine
2013-01-01
Evidence from functional neuroimaging indicates that visual perception of human faces and bodies is carried out by distributed networks of face and body-sensitive areas in the occipito-temporal cortex. However, the dynamics of activity in these areas, needed to understand their respective functional roles, are still largely unknown. We monitored brain activity with millisecond time resolution by recording magnetoencephalographic (MEG) responses while participants viewed photographs of faces, bodies, and control stimuli. The cortical activity underlying the evoked responses was estimated with anatomically-constrained noise-normalised minimum-norm estimate and statistically analysed with spatiotemporal cluster analysis. Our findings point to distinct spatiotemporal organization of the neural systems for face and body perception. Face-selective cortical currents were found at early latencies (120–200 ms) in a widespread occipito-temporal network including the ventral temporal cortex (VTC). In contrast, early body-related responses were confined to the lateral occipito-temporal cortex (LOTC). These were followed by strong sustained body-selective responses in the orbitofrontal cortex from 200–700 ms, and in the lateral temporal cortex and VTC after 500 ms latency. Our data suggest that the VTC region has a key role in the early processing of faces, but not of bodies. Instead, the LOTC, which includes the extra-striate body area (EBA), appears the dominant area for early body perception, whereas the VTC contributes to late and post-perceptual processing. PMID:24039712
Subjective age-of-acquisition norms for 600 Turkish words from four age groups.
Göz, İlyas; Tekcan, Ali I; Erciyes, Aslı Aktan
2017-10-01
The main purpose of this study was to report age-based subjective age-of-acquisition (AoA) norms for 600 Turkish words. A total of 115 children, 100 young adults, 115 middle-aged adults, and 127 older adults provided AoA estimates for 600 words on a 7-point scale. The intraclass correlations suggested high reliability, and the AoA estimates were highly correlated across the four age groups. Children gave earlier AoA estimates than the three adult groups; this was true for high-frequency as well as low-frequency words. In addition to the means and standard deviations of the AoA estimates, we report word frequency, concreteness, and imageability ratings, as well as word length measures (numbers of syllables and letters), for the 600 words as supplemental materials. The present ratings represent a potentially useful database for researchers working on lexical processing as well as other aspects of cognitive processing, such as autobiographical memory.
Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka
2009-06-01
A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2012 CFR
2012-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference in estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
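The standard-departure approach described above reduces to a small computation: convert each neighbour's reading to a z-score (departure from that station's own mean, in units of its own standard deviation), average the z-scores, and rescale by the target station's mean and standard deviation. The numeric values below are invented for illustration; they are not the paper's station statistics.

```python
import numpy as np

# Sketch of missing-value estimation from neighbour standard departures.
# All station statistics here are hypothetical.

def estimate_target(neighbor_obs, neighbor_means, neighbor_stds,
                    target_mean, target_std):
    z = (neighbor_obs - neighbor_means) / neighbor_stds  # z-scores
    return target_mean + target_std * z.mean()           # rescale to target

# Four neighbours (e.g. Mannar, Anuradhapura, Puttalam, Trincomalee).
obs   = np.array([33.1, 34.0, 33.5, 32.4])   # deg C on the missing day
means = np.array([31.0, 32.0, 31.5, 30.5])   # station climatological means
stds  = np.array([1.5, 1.4, 1.3, 1.6])       # station std deviations
t_max = estimate_target(obs, means, stds, target_mean=30.8, target_std=1.2)
print(round(t_max, 2))
```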
Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying
2015-04-30
Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method used the Laplacian function to approximate the L0 norm of sources. Results of the simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for the sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels that were selected by LSL0 was higher than that by SL0 in both simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and t-test for the fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise level and in feature selection. Moreover, LSL0 and SL0 showed better performance than ICA and t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
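The idea behind both SL0 and LSL0 is to replace the non-smooth L0 "norm" (the count of nonzero entries) with a smooth surrogate that tends to it as a width parameter sigma shrinks. The Gaussian surrogate below is the standard SL0 choice; the exact Laplacian form used by LSL0 is an assumption on my part, sketched as the natural Laplacian analogue.

```python
import numpy as np

# Smooth surrogates for the L0 norm ||s||_0 (number of nonzeros):
#   Gaussian  (SL0):         sum_i (1 - exp(-s_i**2 / (2*sigma**2)))
#   Laplacian (LSL0-style):  sum_i (1 - exp(-|s_i| / sigma))
# Both approach the true nonzero count as sigma -> 0.

def l0_gaussian(s, sigma):
    return float(np.sum(1.0 - np.exp(-s**2 / (2.0 * sigma**2))))

def l0_laplacian(s, sigma):
    return float(np.sum(1.0 - np.exp(-np.abs(s) / sigma)))

s = np.zeros(100)
s[[5, 40, 41, 90]] = [1.0, -2.0, 0.5, 3.0]   # 4 nonzero entries
for sigma in (0.1, 0.01, 0.001):
    print(sigma, l0_gaussian(s, sigma), l0_laplacian(s, sigma))
```

Optimizing the surrogate while gradually decreasing sigma is what makes the decomposition fast compared with combinatorial L0 minimization.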
Guimond, Fanny-Alexandra; Brendgen, Mara; Correia, Stephanie; Turgeon, Lyse; Vitaro, Frank
2018-06-21
This study examined the moderating role of classroom injunctive norms salience regarding social withdrawal and regarding aggression in the longitudinal association between these behaviors and peer victimization. A total of 1,769 fourth through sixth graders (895 girls, M = 10.25 years, SD = 1.03) from 23 schools (67 classrooms) completed a peer nomination inventory in the fall (T1) and spring (T2) of the same academic year. Participants circled the name of each student who fit the description provided for social withdrawal, aggression, and peer victimization at T1 and T2. The salience of injunctive norms was sex-specific and operationalized by the extent to which children displaying the behavior were socially rewarded or sanctioned by their classmates. Generalized estimation equations (GEE) showed that the association between social withdrawal at T1 and peer victimization at T2 was moderated by injunctive norms. Social withdrawal at T1 was positively associated with peer victimization at T2 in classrooms where injunctive norms for this behavior were salient and unfavorable, as well as in classrooms where injunctive norms for aggression were salient and favorable, albeit for girls only. The association between aggression at T1 and peer victimization at T2 was also moderated by the injunctive norms regarding this behavior. Aggressive children were less likely to be victimized in classrooms where this behavior was rewarded. These results support bullying interventions that target factors related to the larger peer context, including social norms. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Evolution equation for quantum coherence
Hu, Ming-Liang; Fan, Heng
2016-01-01
The estimation of the decoherence process of an open quantum system is of both theoretical significance and experimental appeal. Practically, the decoherence can be easily estimated if the coherence evolution satisfies some simple relations. We introduce a framework for studying the evolution equation of coherence. Based on this framework, we prove a simple factorization relation (FR) for the l1 norm of coherence and identify the sets of quantum channels for which this FR holds. By using this FR, we further determine the condition on the transformation matrix of the quantum channel that can support permanent freezing of the l1 norm of coherence. We finally reveal the universality of this FR by showing that it holds for many other related coherence and quantum correlation measures. PMID:27382933
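The l1 norm of coherence studied above has a simple closed form: it is the sum of the absolute values of the off-diagonal entries of the density matrix in the reference basis, C_l1(ρ) = Σ_{i≠j} |ρ_ij|.

```python
import numpy as np

# l1 norm of coherence: sum of |rho_ij| over off-diagonal entries.

def l1_coherence(rho):
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# Maximally coherent qubit state |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_plus = np.outer(plus, plus.conj())
print(l1_coherence(rho_plus))          # 1.0 for |+><+|

# An incoherent (diagonal) state has zero coherence.
rho_mixed = np.diag([0.5, 0.5])
print(l1_coherence(rho_mixed))         # 0.0
```

A factorization relation of the kind proved in the paper states that this quantity, evaluated after a suitable channel, factors into a state-independent channel term times the input coherence, which is what makes decoherence easy to estimate.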
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
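As context for the Frobenius-norm convergence results above, here is a crude baseline (not the paper's Sparse Column-wise Inverse Operator): estimate the precision matrix by inverting a slightly ridge-regularized sample covariance and measure the error in the Frobenius norm. All dimensions and the tridiagonal ground truth are illustrative assumptions.

```python
import numpy as np

# Baseline precision-matrix estimation and Frobenius-norm evaluation.
# Sparse methods like the paper's improve on this inverse-covariance
# baseline, especially when p is large relative to n.

rng = np.random.default_rng(4)
p = 5
# Tridiagonal (sparse) true precision matrix; positive definite.
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=5000)

S = np.cov(X, rowvar=False)                      # sample covariance
Omega_hat = np.linalg.inv(S + 1e-3 * np.eye(p))  # ridge-stabilized inverse
err = np.linalg.norm(Omega_hat - Omega, ord="fro")
print(err)   # small when n >> p
```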
Yu, Sheng-Tsung; Chang, Hsing-Yi; Yao, Kai-Ping; Lin, Yu-Hsuan; Hurng, Baai-Shyun
2015-10-01
The aim of this study was to examine the validity of the EuroQOL five dimensions questionnaire (EQ-5D) using nationally representative data from the National Health Interview Survey (NHIS) through comparison with the short-form 36 (SF-36). Data for this study came from the 2009 NHIS in Taiwan. The study sample was the 4007 participants aged 20-64 years who completed the survey. We used SUDAAN 10.0 (SAS-Callable) to carry out weighted estimation and statistical inference. The EQ index was estimated using norm values from a Taiwanese study as well as from Japan and the United Kingdom (UK). The SF-36 score was standardized using American norm values. In terms of concurrent validity, the EQ-5D met the five hypotheses. The results did not fulfill the hypothesis that women would have lower visual analogue scale (EQ-VAS) scores. In terms of discriminant validity, the EQ-5D fulfilled two hypotheses. Our results approached but did not fulfill the hypothesis that there would be a weak association between the physical and psychological dimensions of the EQ-5D and the mental component summary score of the SF-36. Results were comparable regardless of whether the Japanese or UK norm value sets were used. We were able to fulfill many, but not all, of our validity hypotheses regardless of whether the established Japanese or UK norm value sets or the Taiwanese norm values were used. The EQ-5D is an effective and simple instrument for assessing the health-related quality of life of the general population in Taiwan.
Perkins, Jessica M; Perkins, H Wesley; Craig, David W
2010-12-01
Research has shown that excess calories from sugar-sweetened beverages are associated with weight gain among youth. There is limited knowledge, however, regarding perception of sugar-sweetened beverage consumption norms. This study examined the extent of misperception about peer sugar-sweetened beverage consumption norms and the association of perceived peer norms with personal self-reported consumption. Among 3,831 6th- to 12th-grade students in eight schools who completed anonymous cross-sectional surveys between November 2008 and May 2009, students' personal perception of the daily sugar-sweetened beverage consumption norm in their school within their grade (School Grade group) was compared with aggregate self-reports of daily sugar-sweetened beverage consumption for each School Grade group. The median daily sugar-sweetened beverage consumption from personal reports was one beverage in 24 of 29 School Grade groups, two beverages in four School Grade groups, and three beverages in one School Grade group. Seventy-six percent of students overestimated the daily norm in their School Grade group, with 24% perceiving the norm to be three or more beverages per day. Fixed-effects multiple regression analysis showed that the perceived peer sugar-sweetened beverage consumption norm was much more positively associated with personal consumption than was the estimated actual sugar-sweetened beverage consumption norm per School Grade group. Misperceptions of peer sugar-sweetened beverage consumption norms were pervasive and associated with unhealthy sugar-sweetened beverage consumption behavior. These misperceptions may contribute to intake of excess calories, potentially contributing to adolescent obesity. Future research should assess the pervasiveness of sugar-sweetened beverage consumption misperceptions in other school populations as well as causes and consequences of these misperceptions.
Health professionals may wish to consider how normative feedback interventions could potentially reduce consumption. Copyright © 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
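The core comparison in a study like this, the group's actual consumption norm versus each student's perceived norm, can be sketched with hypothetical numbers (not the survey's data):

```python
import numpy as np

# Hypothetical per-student values for one School Grade group (illustrative
# only; the study's data are not reproduced here).
reported  = np.array([1, 1, 0, 2, 1, 1, 3, 1, 0, 1])   # self-reported daily intake
perceived = np.array([3, 2, 3, 4, 2, 3, 5, 2, 3, 3])   # perceived group norm

actual_norm = np.median(reported)                # the group's actual norm
overestimate = np.mean(perceived > actual_norm)  # share who overestimate it
print(actual_norm, overestimate)                 # -> 1.0 1.0
```

In this toy group the actual norm is one beverage per day, yet every student believes the norm is higher, mirroring the pervasive overestimation reported above.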
Semiclassical Dynamics with Exponentially Small Error Estimates
NASA Astrophysics Data System (ADS)
Hagedorn, George A.; Joye, Alain
We construct approximate solutions to the time-dependent Schrödinger equation
Fraction of exhaled nitric oxide (FeNO ) norms in healthy North African children 5-16 years old.
Rouatbi, Sonia; Alqodwa, Ashraf; Ben Mdella, Samia; Ben Saad, Helmi
2013-10-01
(i) To identify factors that influence FeNO values in healthy North African, Arab children aged 6-16 years; (ii) to test the applicability and reliability of previously published FeNO norms; and (iii) if needed, to establish FeNO norms in this population and to prospectively assess their reliability. This was a cross-sectional analytical study. A convenience sample of healthy Tunisian children aged 6-16 years was recruited. First, subjects responded to two questionnaires; then FeNO levels were measured by an online method with an electrochemical analyzer (Medisoft, Sorinnes [Dinant], Belgium). Anthropometric and spirometric data were collected. Simple and multiple linear regressions were determined. The 95% confidence interval (95% CI) and upper limit of normal (ULN) were defined. Two hundred eleven children (107 boys) were retained. Anthropometric data, gender, socioeconomic level, obesity or puberty status, and sports activity were not independent influencing variables. FeNO data for the total sample appeared to be influenced only by maximum mid-expiratory flow (l sec(-1); r(2) = 0.0236, P = 0.0516). For boys, only 1st second forced expiratory volume (l) explained a slight (r(2) = 0.0451) but significant part of FeNO variability (P = 0.0281). For girls, FeNO was not significantly correlated with any of the measured variables. For North African/Arab children, FeNO values were significantly lower than in other populations, and the available published FeNO norms did not reliably predict FeNO in our population. The mean ± SD (95% CI ULN, minimum-maximum) of FeNO (ppb) for the total sample was 5.0 ± 2.9 (5.4, 1.0-17.0). For North African, Arab children of any age, any FeNO value greater than 17.0 ppb may be considered abnormal. Finally, in an additional group of children prospectively assessed, we found no child with a FeNO higher than 17.0 ppb.
Our FeNO norms enrich the global repository of FeNO norms the pediatrician can use to choose the most appropriate norms based on children's location or ethnicity. © 2012 Wiley Periodicals, Inc.
Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker
2007-04-15
In this study, 10 different dairy cattle barns with natural ventilation systems were investigated in terms of structural aspects. The VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 degrees C). Variation in indoor temperature was also determined under these conditions. In the investigated barns, assuming the minimum ventilation requirement was achieved for the -3, 0 and 1.7 degrees C outdoor design temperatures and 70 or 80% indoor relative humidity (IRH), estimated indoor temperatures ranged from 2.2 to 12.2 degrees C at 70% IRH and from 4.3 to 15.0 degrees C at 80% IRH. Barn type, outdoor design temperature and indoor relative humidity significantly (p < 0.01) affected the indoor temperature. The highest ventilation requirement was calculated for the straw yard (13879 m3 h(-1)), while the lowest was estimated for the tie-stall barn (6169.20 m3 h(-1)). Estimated minimum ventilation requirements per animal differed significantly (p < 0.01) among barn types. The effect of outdoor design temperature on minimum ventilation requirements and on minimum ventilation requirements per animal was found to be significant (p < 0.05, p < 0.01). Estimated indoor temperatures were within the thermoneutral zone (-2 to 20 degrees C). Therefore, it can be concluded that the use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.
Annual Estimated Minimum School Program of Utah School Districts, 1984-85.
ERIC Educational Resources Information Center
Utah State Office of Education, Salt Lake City. School Finance and Business Section.
This bulletin presents both the statistical and financial data of the Estimated Annual State-Supported Minimum School Program for the 40 school districts of the State of Utah for the 1984-85 school year. It is published for the benefit of those interested in research into the minimum school programs of the various Utah school districts. A brief…
Gender, health, and initiation of breastfeeding.
Colodro-Conde, Lucía; Limiñana-Gras, Rosa M; Sánchez-López, M Pilar; Ordoñana, Juan R
2015-01-01
The aim of this study was to explore the associations of health, gender, and motherhood with decisions about breastfeeding. The sample consisted of 265 pregnant women (mean age: 32.34, SD: 4.01 years) who were recruited in healthcare centers and hospitals in southeast Spain between 2010 and 2011. Mental health was measured by the 12-Item General Health Questionnaire and gender by the Conformity to Feminine Norms Inventory. Women in our sample showed a higher conformity to gender norms than women surveyed in the adaptation of the inventory to the Spanish population (t = 11.25, p < 0.001, effect estimate (Cohen's d) = 0.59). After adjustment for covariates, women who exclusively breastfed did not differ significantly in their conformity to gender norms from those who used partial breastfeeding or bottle feeding. Although their mental health was good overall, the expectant mothers in our sample had worse mental health than women aged 15-44 years in the Spanish National Health Survey (t = 2.96, p < 0.001, d = 0.26). Those who partially breastfed had significantly better mental health values. Gender norms acted as modulators in a model of factors related to initiation of breastfeeding. This study provides information about health and the social construction of gender norms.
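The effect estimates reported above are Cohen's d values computed from group means and standard deviations; a minimal sketch with illustrative numbers (not the study's raw data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Illustrative values only: a sample of 265 vs. a reference group of 1000
d = cohens_d(52.0, 10.0, 265, 46.0, 10.0, 1000)
print(round(d, 2))   # -> 0.6
```

By the usual rule of thumb, a d around 0.6 (like the 0.59 reported above) is a medium-sized effect.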
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l(2)-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
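The closed-form solution mentioned above can be sketched for a single pixel: a weighted (MLS-style) local linear fit with an l2 complexity penalty. This is a minimal illustration of the ridge-regularized weighted least-squares step only, not the full RLLR scheme (the manifold-regularization term is omitted):

```python
import numpy as np

def rllr_fit(coords, values, weights, lam):
    """Weighted linear fit with an l2 penalty:
    min  sum_i w_i * (y_i - a - b.x_i)^2 + lam * ||(a, b)||^2,
    solved in closed form via the regularized normal equations."""
    X = np.hstack([np.ones((len(coords), 1)), coords])   # [1, x, y] design
    W = np.diag(weights)
    A = X.T @ W @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ W @ values)

# Interpolate the center of a 2x2 pixel neighborhood
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
values = np.array([10., 12., 14., 16.])
w = np.ones(4)                              # equal MLS weights in this toy case
a, bx, by = rllr_fit(coords, values, w, lam=1e-3)
estimate = a + bx * 0.5 + by * 0.5
print(round(estimate, 2))                   # -> 13.0, the bilinear value
```

In the actual MLS setting the weights `w` would decay with distance from the pixel being interpolated; with equal weights and a tiny `lam` the fit reduces to ordinary bilinear interpolation.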
Do social norms play a role in explaining involvement in medical decision-making?
Brabers, Anne E M; van Dijk, Liset; Groenewegen, Peter P; de Jong, Judith D
2016-12-01
Patients' involvement in medical decision-making is crucial to provide good quality of care that is respectful of, and responsive to, patients' preferences, needs and values. Whether people want to be involved in medical decision-making is associated with individual patient characteristics, and health status. However, the observation of differences in whether people want to be involved does not in itself provide an explanation. Insight is necessary into mechanisms that explain people's involvement. This study aims to examine one mechanism, namely social norms. We make a distinction between subjective norms, that is doing what others think one ought to do, and descriptive norms, doing what others do. We focus on self-reported involvement in medical decision-making. A questionnaire was sent to members of the Dutch Health Care Consumer Panel in May 2015 (response 46%; N = 974). A regression model was used to estimate the relationship between socio-demographics, social norms and involvement in medical decision-making. In line with our hypotheses, we observed that the more conservative social norms are, the less people are involved in medical decision-making. The effects for both types of norms were comparable. This study indicates that social norms play a role as a mechanism to explain involvement in medical decision-making. Our study offers a first insight into the possibility that the decision to be involved in medical decision-making is not as individual as it at first seems; someone's social context also plays a role. Strategies aimed at emphasizing patient involvement have to address this social context. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Selvam, Sumithra; Thomas, Tinku; Shetty, Priya; Zhu, Jianjun; Raman, Vijaya; Khanna, Deepti; Mehra, Ruchika; Kurpad, Anura V; Srinivasan, Krishnamachari
2016-12-01
Assessment of developmental milestones based on locally developed norms is critical for accurate estimation of the overall cognitive, behavioral, social, and emotional development of a child. A cross-sectional study was done to develop age-specific norms for developmental milestones using the Vineland Adaptive Behavior Scales (VABS-II) (Sparrow, Cicchetti, & Balla, 2005) for apparently healthy children aged 2 to 5 years from urban Bangalore, India, and to examine their association with anthropometric measures. Mothers (or caregivers) of 412 children participated in the study. Age-specific norms using the inferential norming method and adaptive levels for all domains and subdomains were derived. Low adaptive level, also called delayed developmental milestone, was observed in 2.3% of the children, specifically 2.7% in motor and daily living skills and 2.4% in communication skills. When these children were assessed on the existing U.S. norms, there was a significant overestimation of delayed development in socialization and motor skills, whereas delays in communication and daily living skills were underestimated (all p < .01). Multiple linear regression revealed that stunted and underweight children had significantly lower developmental scores for communication and motor skills compared with normal children (β coefficients ranged from 2.6 to 5.3; all p < .01). In the absence of Indian normative data for VABS-II in preschool children, the prevalence of developmental delay could either be under- or overestimated using Western norms. Thus, locally referenced norms are critical for reliable assessments of development in children. Stunted and underweight children are more likely to have poorer developmental scores compared with healthy children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
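The pseudopolynomial-time claim for integer inputs can be illustrated in dimension 1: a dynamic program over the set of reachable subset sums finds a nonempty subset whose sum has minimum norm (absolute value). This toy sketch illustrates the idea only, not the paper's algorithms:

```python
def min_abs_subset_sum(points):
    """Pseudopolynomial DP (dimension 1): over nonempty subsets of integer
    points, minimize the absolute value of the subset sum. The set of
    reachable sums is bounded by the total range of the input, so the
    running time is polynomial in that range (pseudopolynomial)."""
    reachable = set()
    for x in points:
        reachable |= {x} | {s + x for s in reachable}
    return min(abs(s) for s in reachable)

best = min_abs_subset_sum([7, -3, 5, -9])
print(best)   # -> 0: the whole set sums to zero
```

In higher fixed dimension the same idea enumerates reachable integer sum vectors, which is exactly why the coordinates being integer (and the dimension fixed) matters for the pseudopolynomial bound.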
Borderline features are associated with inaccurate trait self-estimations.
Morey, Leslie C
2014-01-01
Many treatments for Borderline Personality Disorder (BPD) are based upon the hypothesis that gross distortion in perceptions and attributions related to self and others represents a core mechanism for the enduring difficulties displayed by such patients. However, available experimental evidence of such distortions provides equivocal results, with some studies suggesting that BPD is related to inaccuracy in such perceptions and others indicative of enhanced accuracy in some judgments. The current study uses a novel methodology to explore whether individuals with BPD features are less accurate in estimating their levels of universal personality characteristics as compared to community norms. One hundred and four students received course instruction on the Five Factor Model of personality, and then were asked to estimate their levels of these five traits relative to community norms. They then completed the NEO-Five Factor Inventory and the Personality Assessment Inventory-Borderline Features scale (PAI-BOR). Accuracy of estimates was calculated by computing squared differences between self-estimated trait levels and norm-referenced standardized scores on the NEO-FFI. There was a moderately strong relationship between PAI-BOR score and inaccuracy of trait level estimates. In particular, high BOR individuals dramatically overestimated their levels of Agreeableness and Conscientiousness, estimating themselves to be slightly above average on each of these characteristics but actually scoring well below average on both. The accuracy of estimates of levels of Neuroticism was unrelated to BOR scores, despite the fact that BOR scores were highly correlated with Neuroticism. These findings support the hypothesis that a key feature of BPD involves marked perceptual distortions of various aspects of self in relationship to others.
However, the results also indicate that this is not a global perceptual deficit, as high BOR scorers accurately estimated that their emotional responsiveness was well above average. However, such individuals appear to have limited insight into their relative disadvantages in the capacity for cooperative relationships, or their limited ability to approach life in a planful and non-impulsive manner. Such results suggest important targets for treatments addressing problems in self-other representations.
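The accuracy measure described above, squared differences between self-estimated and norm-referenced trait levels, is straightforward to compute; a sketch with hypothetical z-scores (the pattern mimics the reported overestimation of Agreeableness and Conscientiousness, but the numbers are invented):

```python
import numpy as np

# Hypothetical standardized (z) scores for the five traits, in the order
# Agreeableness, Conscientiousness, Neuroticism, Extraversion, Openness.
self_est = np.array([ 0.2,  0.3, -0.1,  0.5,  0.0])   # self-estimated levels
actual   = np.array([-0.8, -0.9,  0.0,  0.4,  0.1])   # norm-referenced scores

inaccuracy = (self_est - actual) ** 2    # per-trait squared difference
total = inaccuracy.sum()                 # overall inaccuracy index
print(inaccuracy.round(2), round(total, 2))
```

In this toy profile, nearly all of the inaccuracy comes from the first two traits, the "slightly above average" self-view against "well below average" actual scores described above.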
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
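The non-smooth optimization handled by ADMM decomposes into simple proximal steps; the two building blocks corresponding to the L1 (sparsity) term and a squared L2 derivative (smoothness) term can be sketched as follows. This is a generic 1-D illustration of those subproblems, not the authors' full kernel-estimation pipeline:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 -- the ADMM subproblem that
    enforces sparsity on the kernel estimate (small entries -> 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smooth_prox(v, beta):
    """Closed-form minimizer of 0.5*||x - v||^2 + beta*||Dx||^2 for a
    1-D forward-difference operator D, via the normal equations -- the
    subproblem corresponding to the squared L2 derivative penalty."""
    n = len(v)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference matrix
    return np.linalg.solve(np.eye(n) + 2 * beta * D.T @ D, v)

v = np.array([0.05, 0.9, 1.0, 0.85, 0.04])  # noisy 1-D "kernel" slice
sparse_step = soft_threshold(v, 0.1)        # small entries set exactly to 0
smooth_step = smooth_prox(v, 1.0)           # neighboring values pulled together
print(sparse_step)
```

In a full ADMM solver these proximal updates alternate with a data-fidelity step and dual-variable updates; here they are shown in isolation.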
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
About JEDI | Jobs and Economic Development Impact Models | NREL
About JEDI The Jobs and Economic Development Impact (JEDI) models are user-friendly screening tools that estimate the economic impacts of constructing and operating power plants and fuel production facilities. Based on user input or default values (derived from industry norms), JEDI estimates the number of jobs and economic impacts to a local area that can reasonably be supported.
2012-12-01
requirements as part of an overall medical support concept. In this document several potential CONOPS proposals are added as food for thought (see Chapter 4...safe flight minimums for manned flight; • En route or terminal environment (landing zone) is contaminated by an industrial spill or by a CBRN event...Further, the U.S. Food and Drug Administration (FDA) and other national/international medical regulatory authorities have requirements for portable
Social vision: sustained perceptual enhancement of affective facial cues in social anxiety
McTeague, Lisa M.; Shumen, Joshua R.; Wieser, Matthias J.; Lang, Peter J.; Keil, Andreas
2010-01-01
Heightened perception of facial cues is at the core of many theories of social behavior and its disorders. In the present study, we continuously measured electrocortical dynamics in human visual cortex, as evoked by happy, neutral, fearful, and angry faces. Thirty-seven participants endorsing high versus low generalized social anxiety (upper and lower tertiles of 2,104 screened undergraduates) viewed naturalistic faces flickering at 17.5 Hz to evoke steady-state visual evoked potentials (ssVEPs), recorded from 129 scalp electrodes. Electrophysiological data were evaluated in the time-frequency domain after linear source space projection using the minimum norm method. Source estimation indicated an early visual cortical origin of the face-evoked ssVEP, which showed sustained amplitude enhancement for emotional expressions specifically in individuals with pervasive social anxiety. Participants in the low symptom group showed no such sensitivity, and a correlational analysis across the entire sample revealed a strong relationship between self-reported interpersonal anxiety/avoidance and enhanced visual cortical response amplitude for emotional, versus neutral expressions. This pattern was maintained across the 3500 ms viewing epoch, suggesting that temporally sustained, heightened perceptual bias towards affective facial cues is associated with generalized social anxiety. PMID:20832490
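The minimum norm method used above for source-space projection has a simple closed form: a Tikhonov-regularized minimum-norm inverse maps sensor data back to a source estimate. A generic sketch with a random toy leadfield (not the study's 129-channel head model):

```python
import numpy as np

def minimum_norm_estimate(L, m, lam=1e-6):
    """Tikhonov-regularized minimum-norm inverse: the source estimate j
    minimizing ||m - L j||^2 + lam * ||j||^2, in its closed form
    j = L^T (L L^T + lam I)^{-1} m."""
    n_sensors = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), m)

rng = np.random.default_rng(1)
L = rng.standard_normal((64, 200))   # toy leadfield: 64 sensors, 200 sources
j_true = np.zeros(200)
j_true[40] = 1.0                     # one active cortical source
m = L @ j_true                       # noiseless sensor measurement
j_hat = minimum_norm_estimate(L, m)
peak = int(np.argmax(np.abs(j_hat)))
print(peak)                          # -> 40, the true source index
```

The estimate is the minimum-norm solution among all source patterns that explain the sensor data, which is why the reconstructed activity is spread out but still peaks at the true source.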
Region-specific reduction of auditory sensory gating in older adults.
Cheng, Chia-Hsiung; Baillet, Sylvain; Lin, Yung-Yang
2015-12-01
Aging has been associated with declines in sensory-perceptual processes. Sensory gating (SG), or repetition suppression, refers to the attenuation of neural activity in response to a second stimulus and is considered to be an automatic process to inhibit redundant sensory inputs. It is controversial whether SG deficits, as tested with an auditory paired-stimulus protocol, accompany normal aging in humans. To reconcile the debates arising from event-related potential studies, we recorded auditory neuromagnetic reactivity in 20 young and 19 elderly adult men and determined the neural activation by using minimum-norm estimate (MNE) source modeling. SG of M100 was calculated by the ratio of the response to the second stimulus over that to the first stimulus. MNE results revealed that fronto-temporo-parietal networks were implicated in the M100 SG. Compared to the younger participants, the elderly showed selectively increased SG ratios in the anterior superior temporal gyrus, anterior middle temporal gyrus, temporal pole and orbitofrontal cortex, suggesting an insufficient age-related gating to repetitive auditory stimulation. These findings also highlight the loss of frontal inhibition of the auditory cortex in normal aging. Copyright © 2015 Elsevier Inc. All rights reserved.
Emotion processing in the visual brain: a MEG analysis.
Peyk, Peter; Schupp, Harald T; Elbert, Thomas; Junghöfer, Markus
2008-06-01
Recent functional magnetic resonance imaging (fMRI) and event-related brain potential (ERP) studies provide empirical support for the notion that emotional cues guide selective attention. Extending this line of research, whole head magneto-encephalogram (MEG) was measured while participants viewed in separate experimental blocks a continuous stream of either pleasant and neutral or unpleasant and neutral pictures, presented for 330 ms each. Event-related magnetic fields (ERF) were analyzed after intersubject sensor coregistration, complemented by minimum norm estimates (MNE) to explore neural generator sources. Both streams of analysis converge by demonstrating the selective emotion processing in an early (120-170 ms) and a late time interval (220-310 ms). ERF analysis revealed that the polarity of the emotion difference fields was reversed across early and late intervals suggesting distinct patterns of activation in the visual processing stream. Source analysis revealed the amplified processing of emotional pictures in visual processing areas with more pronounced occipito-parieto-temporal activation in the early time interval, and a stronger engagement of more anterior, temporal, regions in the later interval. Confirming previous ERP studies showing facilitated emotion processing, the present data suggest that MEG provides a complementary look at the spread of activation in the visual processing stream.
He, Wensi; Yan, Fangyou; Jia, Qingzhu; Xia, Shuqian; Wang, Qiang
2018-03-01
The hazardous potential of ionic liquids (ILs) is becoming an issue of great concern due to their important role in many industrial fields as green agents. The mathematical model for the toxicological effects of ILs is useful for the risk assessment and design of environmentally benign ILs. The objective of this work is to develop QSAR models to describe the minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of ILs against Staphylococcus aureus (S. aureus). A total of 169 and 101 ILs with MICs and MBCs, respectively, are used to obtain multiple linear regression models based on matrix norm indexes. The norm indexes used in this work are proposed by our research group and they are first applied to estimate the antibacterial toxicity of these ILs against S. aureus. These two models precisely and reliably calculated the IL toxicities with a squared correlation coefficient (R2) of 0.919 and a standard error of estimate (SE) of 0.341 (in log unit of mM) for pMIC, and an R2 of 0.913 and SE of 0.282 for pMBC. Copyright © 2017 Elsevier Ltd. All rights reserved.
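The reported statistics (R2 and SE of estimate) come from ordinary multiple linear regression; a generic sketch with random placeholder descriptors standing in for the authors' matrix norm indexes:

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with R^2 and standard error of estimate."""
    Xd = np.hstack([np.ones((len(X), 1)), X])       # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                      # squared corr. coefficient
    se = np.sqrt(ss_res / (len(y) - Xd.shape[1]))   # standard error of estimate
    return beta, r2, se

# Toy dataset: 169 "ILs" with 3 placeholder descriptors (invented values)
rng = np.random.default_rng(0)
X = rng.standard_normal((169, 3))
y = X @ np.array([0.8, -0.5, 0.3]) + 2.0 + 0.3 * rng.standard_normal(169)
beta, r2, se = fit_mlr(X, y)
print(round(r2, 2), round(se, 2))
```

With this noise level the fit lands near R2 ≈ 0.9 and SE ≈ 0.3, i.e. the same quality regime as the pMIC/pMBC models reported above.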
New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms
NASA Astrophysics Data System (ADS)
Hunek, Wojciech P.; Wach, Łukasz
2017-10-01
In this paper new methods for the energy-based minimization of perfect control inputs are presented. For that reason, multivariable integer- and fractional-order models are applied, which can describe a variety of real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. Notwithstanding, the above-mentioned tools do not guarantee optimal control corresponding to optimal input energy. Therefore a new class of inverse-based methods has been introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the great potential of the new energy-based approaches.
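The classical baseline the paper seeks to improve on, the minimum-norm (least-squares) inverse, picks the exact-control input of smallest energy among the infinitely many that exist when the plant has more inputs than outputs. A static toy sketch of that baseline (not the paper's σ- or H-inverse):

```python
import numpy as np

# Underdetermined static "plant": 2 outputs driven by 4 control inputs
# (more actuators than outputs), so infinitely many inputs hit the target.
B = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])
y_ref = np.array([2., 4.])

u_min = np.linalg.pinv(B) @ y_ref                # minimum-norm (least-energy) input
u_other = u_min + np.array([1., 0., -1., 0.])    # adds a null-space component:
                                                 # also exact, but costlier

print(np.allclose(B @ u_min, y_ref),             # both inputs achieve y_ref...
      np.allclose(B @ u_other, y_ref))
print(np.linalg.norm(u_min) < np.linalg.norm(u_other))  # ...but u_min uses less energy
```

The Moore-Penrose pseudoinverse is optimal only in this Euclidean-norm sense per time step; the paper's point is that alternative generalized inverses can yield lower total control energy for dynamic multivariable plants.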
Su, G; Madsen, P; Lund, M S
2009-05-01
Crossbreeding is currently increasing in dairy cattle production. Several studies have shown an environment-dependent heterosis [i.e., an interaction between heterosis and environment (H x E)]. An H x E interaction is usually estimated from a few discrete environment levels. The present study proposes a reaction norm model to describe H x E interaction, which can deal with a large number of environment levels using few parameters. In the proposed model, total heterosis consists of an environment-independent part, which is described as a function of heterozygosity, and an environment-dependent part, which is described as a function of heterozygosity and environmental value (e.g., herd-year effect). A Bayesian approach is developed to estimate the environmental covariates, the regression coefficients of the reaction norm, and other parameters of the model simultaneously in both linear and nonlinear reaction norms. In the nonlinear reaction norm model, the H x E is approximated using linear splines. The approach was tested using simulated data, which were generated using an animal model with a reaction norm for heterosis. The simulation study includes 4 scenarios (the combinations of moderate vs. low heritability and moderate vs. low herd-year variation) of H x E interaction in a nonlinear form. In all scenarios, the proposed model predicted total heterosis very well. The correlation between true heterosis and predicted heterosis was 0.98 in the scenarios with low herd-year variation and 0.99 in the scenarios with moderate herd-year variation. This suggests that the proposed model and method could be a good approach to analyze H x E interactions and predict breeding values in situations in which heterosis changes gradually and continuously over an environmental gradient. 
On the other hand, it was found that a model ignoring H x E interaction did not significantly harm the prediction of breeding value under the simulated scenarios in which the variance for environment-dependent heterosis effects was small (as it generally is), and sires were randomly used over production environments.
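A linear reaction norm for heterosis can be sketched by simulation: an environment-independent term in heterozygosity plus an environment-dependent term in heterozygosity times the environmental value. Here the coefficients are recovered by plain least squares rather than the paper's Bayesian machinery, and all numbers are invented:

```python
import numpy as np

# Simulated linear reaction norm for heterosis:
#   y = b0 * het + b1 * het * e + noise
# where het is heterozygosity and e the environmental (herd-year) value.
rng = np.random.default_rng(2)
n = 5000
het = rng.uniform(0.0, 1.0, n)        # heterozygosity of each animal
e = rng.standard_normal(n)            # environmental value (herd-year)
b0, b1 = 2.0, 0.8                     # true regression coefficients
y = b0 * het + b1 * het * e + 0.5 * rng.standard_normal(n)

# Recover the coefficients by regressing y on het and het*e
X = np.column_stack([het, het * e])
(b0_hat, b1_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(b0_hat, 1), round(b1_hat, 1))   # -> close to 2.0 and 0.8
```

The slope `b1` is what makes heterosis environment-dependent: predicted heterosis for an animal is `b0 * het + b1 * het * e`, changing gradually along the environmental gradient as described above.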
Back-Projection Cortical Potential Imaging: Theory and Results.
Haor, Dror; Shavit, Reuven; Shapiro, Moshe; Geva, Amir B
2017-07-01
Electroencephalography (EEG) is the single brain monitoring technique that is non-invasive, portable, passive, exhibits high temporal resolution, and gives a direct measurement of the scalp electrical potential. A major disadvantage of the EEG is its low spatial resolution, which is the result of the low-conductive skull that "smears" the currents coming from within the brain. Recording brain activity with both high temporal and spatial resolution is crucial for the localization of confined brain activations and the study of brain mechanism functionality, which is then followed by diagnosis of brain-related diseases. In this paper, a new cortical potential imaging (CPI) method is presented. The new method gives an estimation of the electrical activity on the cortex surface and thus removes the "smearing effect" caused by the skull. The scalp potentials are back-projected onto the cortex surface (back-projection CPI, BP-CPI) by formulating a well-posed problem for the Laplace equation that is solved by means of the finite elements method on a realistic head model. A unique solution to the CPI problem is obtained by introducing a cortical normal current estimation technique. The technique is based on the same mechanism used in the well-known surface Laplacian calculation, followed by a scalp-cortex back-projection routine. The BP-CPI passed four stages of validation, including validation on spherical and realistic head models, probabilistic analysis (Monte Carlo simulation), and noise sensitivity tests. In addition, the BP-CPI was compared with the minimum norm estimate CPI approach and found superior for multi-source cortical potential distributions, with very good estimation results (CC > 0.97) on a realistic head model in the regions of interest for two representative cases.
The BP-CPI can be easily incorporated in different monitoring tools and help researchers by maintaining an accurate estimation for the cortical potential of ongoing or event-related potentials in order to have better neurological inferences from the EEG.
Grube, Joel W.; Paschall, Mallie J.
2009-01-01
Strategies to enforce underage drinking laws are aimed at reducing youth access to alcohol from commercial and social sources and deterring its possession and use. However, little is known about the processes through which enforcement strategies may affect underage drinking. The purpose of the current study is to present and test a conceptual model that specifies possible direct and indirect relationships among adolescents’ perceptions of community alcohol norms, enforcement of underage drinking laws, personal beliefs (perceived parental disapproval of alcohol use, perceived alcohol availability, perceived drinking by peers, perceived harm, and personal disapproval of alcohol use), and their past-30-day alcohol use. This study used data from 17,830 middle and high school students who participated in the 2007 Oregon Healthy Teens Survey. Structural equation modeling indicated that perceived community disapproval of adolescents’ alcohol use was directly and positively related to perceived local police enforcement of underage drinking laws. In addition, adolescents’ personal beliefs appeared to mediate the relationship between perceived enforcement of underage drinking laws and past-30-day alcohol use. Enforcement of underage drinking laws appeared to partially mediate the relationship between perceived community disapproval and personal beliefs related to alcohol use. Results of this study suggest that environmental prevention efforts to reduce underage drinking should target adults’ attitudes and community norms about underage drinking as well as the beliefs of youth themselves. PMID:20135210
Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.
Gong, Changcheng; Cai, Yufang; Zeng, Li
2018-01-01
For cone-beam computed tomography (CBCT), transversal shifts of the rotation center inevitably exist, which result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a fixed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real-data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data, with textures well preserved. Study results also support the feasibility of applying the proposed method to other imaging modalities.
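The second calibration stage lends itself to a compact sketch: scan candidate shift values, reconstruct, and keep the shift whose gradient image has the fewest non-negligible entries (the L0 cost). The toy "reconstruction" below, which blurs in proportion to the shift error, and the search grid are illustrative assumptions standing in for the paper's GPU-based FDK pipeline.

```python
import numpy as np

def l0_of_gradient(img, eps=1e-6):
    """Count non-negligible entries of the gradient image (the L0 'norm')."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return int(np.sum(np.abs(gx) > eps) + np.sum(np.abs(gy) > eps))

def calibrate_shift(reconstruct, shifts):
    """Pick the transversal shift whose reconstruction has minimal gradient L0."""
    costs = {s: l0_of_gradient(reconstruct(s)) for s in shifts}
    return min(costs, key=costs.get)

# Toy stand-in for reconstruction: a sharp disc, blurred in proportion to
# the shift error (a wrong rotation center smears edges in a real scan)
def toy_reconstruct(shift, true_shift=2.0):
    x = np.linspace(-1, 1, 64)
    X, Y = np.meshgrid(x, x)
    sharp = (X**2 + Y**2 < 0.25).astype(float)
    err = abs(shift - true_shift)
    if err == 0:
        return sharp
    out = sharp.copy()
    for k in range(1, int(err * 2) + 1):   # crude blur via shifted copies
        out += np.roll(sharp, k, axis=1) + np.roll(sharp, -k, axis=1)
    return out / out.max()

best = calibrate_shift(toy_reconstruct, shifts=[0.0, 1.0, 2.0, 3.0])
print(best)  # the sharpest reconstruction wins
```

In the real pipeline the candidate range would come from the sinogram-symmetry calibration, and each call to the reconstructor would be an FDK reconstruction of the shifted projections.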
Carroll, Suzanne J; Niyonsenga, Theo; Coffee, Neil T; Taylor, Anne W; Daniel, Mark
2018-05-18
Descriptive norms (what other people do) relate to individual-level dietary behaviour and health outcomes, including overweight and obesity. Descriptive norms vary across residential areas, but the impact of spatial variation in norms on individual-level diet and health is poorly understood. This study assessed spatial associations between local descriptive norms for overweight/obesity and insufficient fruit intake (spatially-specific local prevalence), and individual-level dietary intakes (fruit, vegetable and sugary drinks) and 10-year change in body mass index (BMI) and glycosylated haemoglobin (HbA1c). HbA1c and BMI were clinically measured three times over 10 years for a population-based adult cohort (n = 4056) in Adelaide, South Australia. Local descriptive norms for both overweight/obesity and insufficient fruit intake specific to each cohort participant were calculated as the prevalence of these factors, constructed from geocoded population surveillance data aggregated for 1600 m road-network buffers centred on cohort participants' residential addresses. Latent growth models estimated the effect of local descriptive norms on dietary behaviours and change in HbA1c and BMI, accounting for spatial clustering and covariates (individual-level age, sex, smoking status, employment and education, and area-level median household income). Local descriptive overweight/obesity norms were associated with individual-level fruit intake (inversely) and sugary drink consumption (positively), and with worsening HbA1c and BMI. Spatially-specific local norms for insufficient fruit intake were associated with individual-level fruit intake (inversely) and sugary drink consumption (positively) and with worsening HbA1c, but not with change in BMI. Individual-level fruit and vegetable intakes were not associated with change in HbA1c or BMI. Sugary drink consumption was also not associated with change in HbA1c but rather with increasing BMI.
Adverse local descriptive norms for overweight/obesity and insufficient fruit intake are associated with unhealthful dietary intakes and worsening HbA 1c and BMI. As such, spatial variation in lifestyle-related norms is an important consideration relevant to the design of population health interventions. Adverse local norms influence health behaviours and outcomes and stand to inhibit the effectiveness of traditional intervention efforts not spatially tailored to local population characteristics. Spatially targeted social de-normalisation strategies for regions with high levels of unhealthful norms may hold promise in concert with individual, environmental and policy intervention approaches.
Tanaka, Naoaki; Papadelis, Christos; Tamilia, Eleonora; Madsen, Joseph R; Pearl, Phillip L; Stufflebeam, Steven M
2018-04-27
This study evaluates magnetoencephalographic (MEG) spike population as compared with intracranial electroencephalographic (IEEG) spikes using a quantitative method based on distributed source analysis. We retrospectively studied eight patients with medically intractable epilepsy who had an MEG and subsequent IEEG monitoring. Fifty MEG spikes were analyzed in each patient using minimum norm estimate. For individual spikes, each vertex in the source space was considered activated when its source amplitude at the peak latency was higher than a threshold, which was set at 50% of the maximum amplitude over all vertices. We mapped the total count of activation at each vertex. We also analyzed 50 IEEG spikes in the same manner over the intracranial electrodes and created the activation count map. The location of the electrodes was obtained in the MEG source space by coregistering postimplantation computed tomography to MRI. We estimated the MEG- and IEEG-active regions associated with the spike populations using the vertices/electrodes with a count over 25. The activation count maps of MEG spikes demonstrated the localization associated with the spike population by variable count values at each vertex. The MEG-active region overlapped with 65 to 85% of the IEEG-active region in our patient group. Mapping the MEG spike population is valid for demonstrating the trend of spikes clustering in patients with epilepsy. In addition, comparison of MEG and IEEG spikes quantitatively may be informative for understanding their relationship.
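The per-spike thresholding and counting step can be sketched as follows, assuming the distributed source amplitudes at each spike's peak latency are already available (the minimum norm estimation itself is not reproduced here):

```python
import numpy as np

def activation_count_map(spike_amplitudes, rel_threshold=0.5):
    """spike_amplitudes: (n_spikes, n_vertices) source amplitudes at peak latency.
    A vertex is 'active' for a spike if its amplitude exceeds rel_threshold
    times that spike's maximum over all vertices; return per-vertex counts."""
    amps = np.asarray(spike_amplitudes, dtype=float)
    thresh = rel_threshold * amps.max(axis=1, keepdims=True)
    return (amps >= thresh).sum(axis=0)

def active_region(counts, min_count):
    """Vertices whose activation count exceeds min_count (e.g. 25 of 50 spikes)."""
    return np.flatnonzero(counts > min_count)

rng = np.random.default_rng(0)
# toy data: 50 spikes over 100 vertices, vertices 10-19 consistently strong
amps = rng.random((50, 100))
amps[:, 10:20] += 2.0
counts = activation_count_map(amps)
region = active_region(counts, min_count=25)
print(region)  # recovers vertices 10..19
```

The same routine applies to the IEEG count map by substituting electrode amplitudes for vertex amplitudes; overlap between the two active regions can then be reported as the fraction of IEEG-active sites that are also MEG-active.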
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to construct a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
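A sketch of the underlying estimator: the best linear unbiased estimate (BLUE) of a common mean under a distance-based correlation model. The exponential correlation form and its parameters are illustrative assumptions, not necessarily the model fitted in the report.

```python
import numpy as np

def blue_mean(y, coords, corr_range=1.0, sigma2=1.0):
    """Best linear unbiased estimate of a common mean when observations are
    correlated by distance: Cov[i, j] = sigma2 * exp(-d_ij / corr_range)."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-d / corr_range)
    ones = np.ones(len(y))
    w = np.linalg.solve(cov, ones)   # minimum-variance weights ∝ Σ^{-1} 1
    w /= w.sum()                     # unbiasedness: weights sum to 1
    return float(w @ y), w

y = np.array([10.0, 11.0, 9.0, 30.0])
coords = [(0, 0), (0.1, 0), (0, 0.1), (5, 5)]  # first three points cluster
est, w = blue_mean(y, coords)
print(est, w)
```

Clustered samples share information, so the estimator down-weights each of them relative to the isolated point, which is the sense in which geographic scatter reduces variance.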
NanoStringNormCNV: pre-processing of NanoString CNV data.
Sendorek, Dorota H; Lalonde, Emilie; Yao, Cindy Q; Sabelnykova, Veronica Y; Bristow, Robert G; Boutros, Paul C
2018-03-15
The NanoString System is a well-established technology for measuring RNA and DNA abundance. Although it can estimate copy number variation, relatively few tools support analysis of these data. To address this gap, we created NanoStringNormCNV, an R package for pre-processing and copy number variant calling from NanoString data. This package implements algorithms for pre-processing, quality-control, normalization and copy number variation detection. A series of reporting and data visualization methods support exploratory analyses. To demonstrate its utility, we apply it to a new dataset of 96 genes profiled on 41 prostate tumour and 24 matched normal samples. NanoStringNormCNV is implemented in R and is freely available at http://labs.oicr.on.ca/boutros-lab/software/nanostringnormcnv. paul.boutros@oicr.on.ca. Supplementary data are available at Bioinformatics online.
Probability theory, not the very guide of life.
Juslin, Peter; Nilsson, Håkan; Winman, Anders
2009-10-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
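The article's computer-simulation argument can be illustrated with a toy version: estimate a conjunction P(A and B) from noisy perceptions of P(A) and P(B), once by multiplying and once by a linear additive rule (here the least-squares linear fit 0.5a + 0.5b − 0.25 to the product over the unit square). The trial counts and noise level are illustrative, not the authors' simulation design.

```python
import random

def simulate(n_trials=20000, noise=0.25, seed=1):
    """Mean squared error of multiplicative vs linear additive integration of
    two noisily perceived probabilities when estimating P(A and B)."""
    rng = random.Random(seed)
    err_mult = err_add = 0.0
    for _ in range(n_trials):
        pa, pb = rng.random(), rng.random()
        true = pa * pb
        # noisy internal estimates of the component probabilities
        na = min(1.0, max(0.0, pa + rng.gauss(0, noise)))
        nb = min(1.0, max(0.0, pb + rng.gauss(0, noise)))
        err_mult += (na * nb - true) ** 2
        # linear additive rule: least-squares linear fit of a*b on the unit square
        err_add += (0.5 * na + 0.5 * nb - 0.25 - true) ** 2
    return err_mult / n_trials, err_add / n_trials

m, a = simulate()
print(m, a)  # under noise, the two rules land in a similar accuracy range
```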
Carroll, Suzanne J; Niyonsenga, Theo; Coffee, Neil T; Taylor, Anne W; Daniel, Mark
2017-08-23
Associations between local-area residential features and glycosylated hemoglobin (HbA1c) may be mediated by individual-level health behaviors. Such indirect effects have rarely been tested. This study assessed whether individual-level self-reported physical activity mediated the influence of local-area descriptive norms and objectively expressed walkability on 10-year change in HbA1c. HbA1c was assessed three times for adults in a 10-year population-based biomedical cohort (n = 4056). Local-area norms specific to each participant were calculated, aggregating responses from a separate statewide surveillance survey for 1600 m road-network buffers centered on participant addresses (local prevalence of overweight/obesity (body mass index ≥25 kg/m²) and physical inactivity (<150 min/week)). Separate latent growth models estimated direct and indirect (through physical activity) effects of local-area exposures on change in HbA1c, accounting for spatial clustering and covariates (individual-level age, sex, smoking status, marital status, employment and education, and area-level median household income). HbA1c worsened over time. Local-area norms directly and indirectly predicted worsening HbA1c trajectories. Walkability was directly and indirectly protective of worsening HbA1c. Local-area descriptive norms and walkability influence cardiometabolic risk trajectory through individual-level physical activity. Efforts to reduce population cardiometabolic risk should consider the extent of local-area unhealthful behavioral norms and walkability in tailoring strategies to improve physical activity.
Daniel, Mark
2017-01-01
Associations between local-area residential features and glycosylated hemoglobin (HbA1c) may be mediated by individual-level health behaviors. Such indirect effects have rarely been tested. This study assessed whether individual-level self-reported physical activity mediated the influence of local-area descriptive norms and objectively expressed walkability on 10-year change in HbA1c. HbA1c was assessed three times for adults in a 10-year population-based biomedical cohort (n = 4056). Local-area norms specific to each participant were calculated, aggregating responses from a separate statewide surveillance survey for 1600 m road-network buffers centered on participant addresses (local prevalence of overweight/obesity (body mass index ≥25 kg/m2) and physical inactivity (<150 min/week)). Separate latent growth models estimated direct and indirect (through physical activity) effects of local-area exposures on change in HbA1c, accounting for spatial clustering and covariates (individual-level age, sex, smoking status, marital status, employment and education, and area-level median household income). HbA1c worsened over time. Local-area norms directly and indirectly predicted worsening HbA1c trajectories. Walkability was directly and indirectly protective of worsening HbA1c. Local-area descriptive norms and walkability influence cardiometabolic risk trajectory through individual-level physical activity. Efforts to reduce population cardiometabolic risk should consider the extent of local-area unhealthful behavioral norms and walkability in tailoring strategies to improve physical activity. PMID:28832552
On the decay of solutions to the 2D Neumann exterior problem for the wave equation
NASA Astrophysics Data System (ADS)
Secchi, Paolo; Shibata, Yoshihiro
We consider the exterior problem in the plane for the wave equation with a Neumann boundary condition and study the asymptotic behavior of the solution for large times. For possible application we are interested to show a decay estimate which does not involve weighted norms of the initial data. In the paper we prove such an estimate, by a combination of the estimate of the local energy decay and decay estimates for the free space solution.
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar. Although the multi-aspect basis functions are non-orthogonal, they can still be used in an MMSE estimator that models the object echo as a weighted sum of the basis functions.
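A minimal linear-Gaussian sketch of the idea: model the echo as a weighted sum of (non-orthogonal) basis columns and form the linear MMSE coefficient estimate, which remains well posed despite correlation between columns. Dimensions, noise levels, and the basis itself are illustrative assumptions, not the report's sonar model.

```python
import numpy as np

def mmse_coeffs(B, y, noise_var=0.1, prior_var=1.0):
    """Linear MMSE estimate of c in y = B @ c + n under white Gaussian noise
    and an i.i.d. Gaussian prior on c. Valid even when the columns of B are
    non-orthogonal: a ridge-type solution (B'B + (noise_var/prior_var) I)^-1 B'y."""
    lam = noise_var / prior_var
    G = B.T @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(G, B.T @ y)

rng = np.random.default_rng(3)
n, k = 200, 5
B = rng.normal(size=(n, k))
B[:, 1] = B[:, 0] + 0.1 * rng.normal(size=n)   # deliberately non-orthogonal columns
c_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = B @ c_true + 0.1 * rng.normal(size=n)
c_hat = mmse_coeffs(B, y, noise_var=0.01)
print(np.round(c_hat, 2))
```

With nearly collinear columns the individual coefficients are ill-conditioned, but the regularized MMSE solution stays stable and the well-identified combinations (such as the sum of the two correlated coefficients) are recovered accurately.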
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity alone, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by a distance metric on manifold structures. With this manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
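The weighted singular-value thresholding step the authors reduce their model to can be sketched directly; the inverse-magnitude weighting used below is a common heuristic for weighted nuclear norm denoising, not necessarily the paper's exact weights.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: soft-threshold each singular
    value s_i by its weight w_i, the proximal operator of the weighted
    nuclear norm sum_i w_i * s_i (for non-descending weights)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights), 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(7)
# low-rank 'clean' matrix (a stand-in for a group of similar patches) plus noise
L = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))
noisy = L + 0.5 * rng.normal(size=(40, 40))
# small weights for large (signal) singular values, large for small (noise) ones
s = np.linalg.svd(noisy, compute_uv=False)
weights = 10.0 / (s + 1e-8)          # inverse-magnitude weighting heuristic
denoised = weighted_svt(noisy, weights)
err_noisy = np.linalg.norm(noisy - L)
err_den = np.linalg.norm(denoised - L)
print(err_noisy, err_den)  # the denoised matrix is closer to the low-rank truth
```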
López-Mosquera, Natalia; García, Teresa; Barrena, Ramo
2014-03-15
This paper relates the concept of moral obligation and the components of the Theory of Planned Behavior to determine their influence on visitors' willingness to pay for park conservation. The sample consists of 190 visitors to an urban Spanish park. The estimated mean willingness to pay was €12.67 per year. The results also indicated that moral norm was the major factor in predicting behavioral intention, followed by attitudes. The new relations established between the components of the Theory of Planned Behavior show that social norms significantly determine the attitudes, moral norms and perceived behavioral control of individuals. The proportion of explained variance shows that the inclusion of moral norms improves the explanatory power of the original Theory of Planned Behavior model (from 32% to 40%). Community-based social marketing and local campaigns are the main strategies that land managers should follow to promote responsible, pro-environmental attitudes as well as a greater willingness to pay for such goods. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant chiller performance impact. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is the sum of hourly load (from a typical building in the climate of interest) and specific power for the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
Payne, Krista A; Rofail, Diana; Baladi, Jean-François; Viala, Muriel; Abetz, Linda; Desrosiers, Marie-Pierre; Lordan, Noreen; Ishak, Khajak; Proskorovsky, Irina
2008-08-01
This study of UK patients examines clinical, health-related quality of life (HRQOL) and economic outcomes associated with iron chelation therapy (ICT). Desferrioxamine (DFO) (Desferal; Novartis, Switzerland) and deferiprone (Ferriprox; Apotex, Canada) are ICTs used to treat iron overload. DFO requires 8- to 12-hour infusions a minimum of five times per week. Deferiprone is administered in an oral daily regimen. Although pharmacologically efficacious, the clinical effectiveness of ICT in the real-world setting is yet to be fully elucidated. A naturalistic cohort study of 60 patients (beta-thalassaemia, n=40; sickle cell disease, n=14; myelodysplastic syndromes, n=6; 63% female) receiving ICT in four UK treatment centres was conducted. Serum ferritin level data were abstracted from medical charts. Compliance, HRQOL, satisfaction and resource utilisation data were collected from interviews. Maximum ICT costs were estimated using the resource utilisation data associated with DFO. Mean serum ferritin levels generally remained elevated despite ICT. Compliance was suboptimal and HRQOL scores were lower than population norms. The total estimated mean weighted annual per-patient cost of DFO treatment was approximately £19,000. DFO-related equipment, DFO drug, and home healthcare were estimated to account for 43%, 19% and 24% of costs, respectively. Other more minor components of total annual costs were for in-patient infusions, ICT home delivery services and monitoring costs. Generally, patients are not achieving target serum ferritin thresholds despite chronic treatment for iron overload. ICT appears to negatively impact HRQOL; compliance with ICT is poor; and, in the case of DFO, treatment costs well exceed the cost of DFO alone. These results suggest that current ICT in the real-world setting is suboptimal with respect to various clinical, HRQOL and economic outcomes.
Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-09-30
To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45 p, and 50 p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45 p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45 p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health, saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45 p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm.
The previously announced policy of a minimum unit price, if set at expected levels between 40 p and 50 p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.
Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-01-01
Objective To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Design Modelling study using the Sheffield Alcohol Policy Model version 2.5. Setting England 2014-15. Population Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Interventions Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Main outcome measures Changes in mean consumption in terms of units of alcohol, drinkers’ expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. Results The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers’ mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health—saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23 700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. Conclusions The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm.
The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. PMID:25270743
Saddeek, Ali Mohamed
2017-01-01
Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).
NASA Technical Reports Server (NTRS)
Bogdan, V. M.; Bond, V. B.
1980-01-01
The deviation of the solution of the differential equation y′ = f(t, y), y(0) = y₀ from the solution of the perturbed system z′ = f(t, z) + g(t, z), z(0) = z₀ was investigated for the case where f and g are continuous functions from I × Rⁿ into Rⁿ, where I = (0, a) or I = (0, ∞). These functions are assumed to satisfy the Lipschitz condition in the variable z. The space Lip(I) of all such functions with suitable norms forms a Banach space. By introducing a suitable norm in the space of continuous functions C(I), the problem can be reduced to an equivalent problem in terms of operators in such spaces. A theorem on existence and uniqueness of the solution is presented by means of Banach space techniques. Norm estimates on the rate of growth of such solutions are found. As a consequence, estimates of the deviation of a solution due to perturbation are obtained. Continuity of the solution in the initial data and in the perturbation is established. A nonlinear perturbation of the harmonic oscillator is considered, as is a perturbation of the equations of the restricted three-body problem linearized at a libration point.
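The kind of deviation estimate discussed here can be checked numerically in a scalar case: by Gronwall's inequality, a Lipschitz constant L for f and a uniform bound ε on g give |z(t) − y(t)| ≤ (ε/L)(e^{Lt} − 1). The particular f and g below are illustrative choices satisfying those hypotheses, not examples from the report.

```python
import math

def euler(f, y0, t1, n=100000):
    """Explicit Euler integration of y' = f(t, y) on [0, t1]."""
    h = t1 / n
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

L = 1.0        # Lipschitz constant of f in y
eps = 0.01     # uniform bound on the perturbation g

f = lambda t, y: -L * math.sin(y)          # |f(t,y1) - f(t,y2)| <= L|y1 - y2|
g = lambda t, y: eps * math.cos(3 * t)     # |g| <= eps

t1, y0 = 2.0, 1.0
y = euler(f, y0, t1)
z = euler(lambda t, x: f(t, x) + g(t, x), y0, t1)
gronwall = (eps / L) * (math.exp(L * t1) - 1.0)   # classical deviation bound
print(abs(z - y), gronwall)  # the observed deviation respects the bound
```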
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
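The SK initialization stage can be sketched for a SISO example; the paper's setting is multivariable with a sparse QR solver, so the version below is a simplified illustration, and the monic-denominator normalization is an assumption.

```python
import numpy as np

def sk_iteration(w, H, nb=2, na=2, iters=10):
    """Sanathanan-Koerner style iteration for fitting a rational transfer
    function b(s)/a(s) (a monic) to frequency response data H at s = jw.
    Each pass solves a linear least squares problem in the coefficients,
    reweighted by the previous iterate's denominator so that the equation
    error approximates the true 2-norm output error."""
    s = 1j * w
    Vb = np.vander(s, nb + 1, increasing=True)   # columns 1, s, ..., s^nb
    Va = np.vander(s, na, increasing=True)       # columns 1, s, ..., s^(na-1)
    d_prev = np.ones_like(s)                     # initial denominator weight
    for _ in range(iters):
        # linearized equation: b(s) - H * a_low(s) = H * s^na
        A = np.hstack([Vb, -H[:, None] * Va]) / d_prev[:, None]
        rhs = H * s**na / d_prev
        theta, *_ = np.linalg.lstsq(
            np.vstack([A.real, A.imag]),
            np.concatenate([rhs.real, rhs.imag]),
            rcond=None,
        )
        b, a_low = theta[:nb + 1], theta[nb + 1:]
        d_prev = s**na + Va @ a_low              # updated denominator
    return b, np.append(a_low, 1.0)              # a in increasing powers of s

# data from a known second-order system H = 1 / (s^2 + 0.4 s + 1)
w = np.linspace(0.1, 3.0, 60)
H = 1.0 / ((1j * w)**2 + 0.4 * (1j * w) + 1.0)
b, a = sk_iteration(w, H, nb=0, na=2)
print(np.round(b, 3), np.round(a, 3))  # expect b ≈ [1], a ≈ [1, 0.4, 1]
```

In the full algorithm this solution then seeds the Gauss-Newton iteration, which minimizes the true (rather than equation-error) 2-norm.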
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
Simons-Morton, Bruce; Haynie, Denise; Bible, Joe; Liu, Danping
2018-02-05
Descriptive norms are commonly associated with participant drinking. However, study participants may incorrectly perceive that their peers drink about the same amount as they do, which would bias estimates of drinking homogeneity. This research examined the magnitude of associations between emerging adults' reports of their own drinking and peer drinking measured the previous year by two types of measures: (1) participants' perceptions of friends' drinking; and (2) actual drinking reported by nominated peers. The data are from annual surveys conducted in 2014 and 2015, Waves 4 and 5 (the first 2 years after high school) of 7 annual assessments of the NEXT Generation Health Study (n = 323). The outcomes were associations of participant alcohol use with perceived friend use (five closest, closest male, and closest female friends), and with actual peer use. Logistic regression analyses estimated the magnitudes of prospective associations between each measure of peer drinking at W4 and participant drinking at W5.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
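The core matrix interpolative decomposition that the tensor algorithm reduces to can be sketched with greedy column pivoting; the paper builds these IDs on randomized projections of the CTD terms, so this standalone matrix version is a simplified illustration.

```python
import numpy as np

def interp_decomp(A, k):
    """Rank-k interpolative decomposition A ≈ A[:, idx] @ P, using greedy
    column-pivoted Gram-Schmidt to select k 'skeleton' columns of A."""
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # pivot: largest residual column
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)                        # deflate the residual
        idx.append(j)
    # coefficients expressing every column of A in terms of the selected ones
    P, *_ = np.linalg.lstsq(A[:, idx], A, rcond=None)
    return idx, P

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))   # exactly rank 4
idx, P = interp_decomp(A, k=4)
err = np.linalg.norm(A - A[:, idx] @ P) / np.linalg.norm(A)
print(err)  # near machine precision for an exactly rank-4 matrix
```

Applied to the matrix of CTD terms, the selected columns correspond to the retained terms and P gives the linear combination representing the discarded ones.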
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithms), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
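The DC idea can be illustrated with one standard surrogate (a generic sketch, not necessarily the letter's exact approximation): the discontinuous ℓ0 count is replaced by the capped-ℓ1 function, which is an explicit difference g − h of two convex functions.

```python
import numpy as np

def capped_l1(x, theta=0.5):
    """Capped-ℓ1 surrogate for ||x||_0: sum_i min(1, |x_i|/theta).
    Approaches the exact ℓ0 count as theta -> 0."""
    return np.minimum(1.0, np.abs(x) / theta).sum()

def dc_parts(x, theta=0.5):
    """The same surrogate written as a difference g - h of convex functions:
    g(x) = ||x||_1 / theta,  h(x) = sum_i max(|x_i| - theta, 0) / theta."""
    g = np.abs(x).sum() / theta
    h = np.maximum(np.abs(x) - theta, 0.0).sum() / theta
    return g, h
```

A DCA scheme then alternates between linearizing the concave part −h and solving the resulting convex subproblem; for x = (0, 0.01, 2, −3) and θ = 0.5 the surrogate counts the two large entries fully and the tiny one only fractionally.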
Prevalence of autosomal dominant polycystic kidney disease in the European Union.
Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S
2017-08-01
Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). The German and British population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, with corresponding screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening were implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.
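A back-of-envelope check using only the figures quoted in the abstract (N = 407 million; prevalences per 10 000) shows what those rates mean in absolute case counts:

```python
# Back-of-envelope check using only the figures quoted above:
# N = 407 million, minimum prevalence 3.29/10 000, screening prevalence
# 3.96/10 000, EU rare-disease threshold 5/10 000.
population = 407_000_000

def cases(per_10k):
    """Expected case count at a given prevalence per 10 000."""
    return per_10k / 10_000 * population

minimum_cases = cases(3.29)    # known ADPKD cases across the 19 countries
screening_cases = cases(3.96)  # expected if screening were implemented
rare_disease_cap = cases(5.0)  # case count at the EU rare-disease cutoff
```

Even the screening-adjusted estimate stays below the rare-disease cutoff, which is the abstract's central claim.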
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three different control strategies are studied and compared: a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with a Kalman filter estimator (KFE); an optimal static output feedback (SOF) controller; and a classic feedback controller (CFC). The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and the norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Reconstructing the duty of water: a study of emergent norms in socio-hydrology
NASA Astrophysics Data System (ADS)
Wescoat, J. L., Jr.
2013-06-01
This paper assesses changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, a line of research useful for anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late-18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for water rights appropriation (e.g., only 40 to 80 acres per cfs). The final section shows that while the duty of water concept has now been eclipsed by other measures and standards of water efficiency, it may have continuing relevance for anticipating if not predicting emerging social values with respect to water.
Minimum viable populations: Is there a 'magic number' for conservation practitioners?
Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens
2011-01-01
Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...
Minimum Wage Effects on Educational Enrollments in New Zealand
ERIC Educational Resources Information Center
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
ERIC Educational Resources Information Center
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
Does the Minimum Wage Affect Welfare Caseloads?
ERIC Educational Resources Information Center
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Blind compressive sensing dynamic MRI
Lingala, Sajan Goud; Jacob, Mathews
2013-01-01
We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme with current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as an extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding.
Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
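One standard ingredient of majorize-minimize schemes with an ℓ1 coefficient prior is the proximal (shrinkage) step; a minimal sketch of that step alone (illustrative, not the paper's full three-subproblem algorithm):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal map of lam * ||x||_1: solves
    argmin_z 0.5 * ||z - x||^2 + lam * ||z||_1 elementwise by shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Entries smaller than lam are zeroed (attenuating insignificant basis functions, as the abstract notes) and larger entries are shrunk toward zero by lam.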
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
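The mechanistic idea, that edge losses drain small patches faster than internal growth replenishes them, is captured in its simplest textbook form by the 1-D Kierstead-Slobodkin/Skellam critical patch size. This is a simpler analogue of the authors' 2-D diffusion CMP model, not their actual model; D and r below are illustrative parameters.

```python
import math

def kiss_critical_patch_size(D, r):
    """Classic 1-D critical patch size (Kierstead-Slobodkin / Skellam):
    a population diffusing at rate D with intrinsic growth rate r persists
    only in patches longer than L = pi * sqrt(D / r)."""
    return math.pi * math.sqrt(D / r)
```

Faster diffusion (more edge encounters) raises the critical size; faster growth lowers it, mirroring the trade-off behind the butterfly CMP estimate.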
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an ℓp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
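The stated link between ℓp and Schatten-norm proximal maps is easiest to see for p = 1, where the prox of the Schatten-1 (nuclear) norm amounts to applying the ℓ1 prox (soft-thresholding) to the singular values; a minimal sketch of that special case:

```python
import numpy as np

def prox_schatten1(X, lam):
    """Proximal map of lam * ||X||_S1 (Schatten norm of order p = 1):
    the ℓ1 prox (soft-thresholding) applied to the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```

For general p the same pattern holds: compute the SVD, apply the scalar ℓp prox to the spectrum, and reassemble.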
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. Tinsley
1993-01-01
A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh-dependent norm in which the coefficients depend upon both the local mesh size h_K and a number p_K which can be identified with the spectral order of the local approximations over each element.
Stretchy binary classification.
Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo
2018-01-01
In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
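For the familiar p = 2 special case (illustrative only; the article treats general ℓp), the right pseudoinverse of a fat matrix picks out the minimum-norm exact solution of an underdetermined system:

```python
import numpy as np

# Underdetermined system (more unknowns than equations): infinitely many
# exact solutions; the right pseudoinverse A.T @ inv(A @ A.T) selects the
# one of minimum ℓ2 norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))            # 3 equations, 8 unknowns
b = rng.standard_normal(3)

x_min = A.T @ np.linalg.solve(A @ A.T, b)  # explicit right-pseudoinverse form
x_svd = np.linalg.pinv(A) @ b              # same estimate via the SVD
```

The left construction, (A.T A)^(-1) A.T, plays the mirror role in the overdetermined case.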
NASA Technical Reports Server (NTRS)
Lei, Shaw-Min; Yao, Kung
1990-01-01
A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the ℓ2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.
Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria
2007-01-01
Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
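The Minimum Norm Estimation used in the source analysis above has a simple closed form in its classic Tikhonov-regularized version; a minimal sketch with an illustrative gain matrix L (sensors × sources) and regularization weight lam (this is the textbook formula, not the exact pipeline of any particular MEG package):

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.1):
    """Classic Tikhonov-regularized minimum norm estimate:
    x_hat = L.T @ inv(L @ L.T + lam**2 * I) @ y,
    the smallest-ℓ2-norm source pattern consistent with sensor data y."""
    G = L @ L.T + lam**2 * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)
```

As lam shrinks toward zero the estimate reproduces the sensor data exactly; larger lam trades data fit against source energy, which is how noisy MEG measurements are handled.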
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to the other inversion techniques based on principle of regularization, Bayesian, minimum norm, maximum entropy on mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to other techniques, with a practical choice of a priori information and error statistics, while eliminating the need of additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice to the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.
Simos, Panagiotis G.; Rezaie, Roozbeh; Papanicolaou, Andrew C.; Fletcher, Jack M.
2014-01-01
The study examined whether individual differences in performance and verbal IQ affect the profiles of reading-related regional brain activation in 127 students experiencing reading difficulties and typical readers. Using magnetoencephalography in a pseudoword read-aloud task, we compared brain activation profiles of students experiencing word-level reading difficulties who did (n = 29) or did not (n = 36) meet the IQ-reading achievement discrepancy criterion. Typical readers assigned to a lower-IQ (n = 18) or a higher IQ (n = 44) subgroup served as controls. Minimum norm estimates of regional cortical activity revealed that the degree of hypoactivation in the left superior temporal and supramarginal gyri in both RD subgroups was not affected by IQ. Moreover, IQ did not moderate the positive association between degree of activation in the left fusiform gyrus and phonological decoding ability. We did find, however, that the hypoactivation of the left pars opercularis in RD was restricted to lower-IQ participants. In accordance with previous morphometric and fMRI studies, degree of activity in inferior frontal, and inferior parietal regions correlated with IQ across reading ability subgroups. Results are consistent with current views questioning the relevance of IQ-discrepancy criteria in the diagnosis of dyslexia. PMID:24409136
Miozzo, Michele; Pulvermüller, Friedemann; Hauk, Olaf
2015-01-01
The time course of brain activation during word production has become an area of increasingly intense investigation in cognitive neuroscience. The predominant view has been that semantic and phonological processes are activated sequentially, at about 150 and 200–400 ms after picture onset. Although evidence from prior studies has been interpreted as supporting this view, these studies were arguably not ideally suited to detect early brain activation of semantic and phonological processes. We here used a multiple linear regression approach to magnetoencephalography (MEG) analysis of picture naming in order to investigate early effects of variables specifically related to visual, semantic, and phonological processing. This was combined with distributed minimum-norm source estimation and region-of-interest analysis. Brain activation associated with visual image complexity appeared in occipital cortex at about 100 ms after picture presentation onset. At about 150 ms, semantic variables became physiologically manifest in left frontotemporal regions. In the same latency range, we found an effect of phonological variables in the left middle temporal gyrus. Our results demonstrate that multiple linear regression analysis is sensitive to early effects of multiple psycholinguistic variables in picture naming. Crucially, our results suggest that access to phonological information might begin in parallel with semantic processing around 150 ms after picture onset. PMID:25005037
NASA Astrophysics Data System (ADS)
Vazquez, Gerardo; Magana, Fernando; Salas-Torres, Osiris
We explore the structural interactions between graphene and transition metals such as palladium (Pd) and titanium (Ti), and the possibility of inducing superconductivity in a graphene sheet in two cases: one by doping its surface with palladium atoms sitting at the centers of the hexagons of the graphene layer, and the other by covering the graphene layer with two layers of titanium atoms. The results were obtained from first-principles density functional theory in the local density approximation. The Quantum ESPRESSO package was used with norm-conserving pseudopotentials. All of the structures considered were relaxed to their minimum energy configurations. Phonon frequencies were calculated using the linear-response technique on several phonon wave-vector meshes. The electron-phonon coupling parameter was calculated with several electron momentum k-meshes. The superconducting critical temperature was estimated using the Allen-Dynes formula with μ* = 0.1 - 0.15. We note that palladium and titanium are good candidate materials to show a metal-to-superconductor transition. We thank Dirección General de Asuntos del Personal Académico de la Universidad Nacional Autónoma de México for partial financial support through Grant IN-106514, and the Miztli supercomputing center for technical assistance.
Robust Means and Covariance Matrices by the Minimum Volume Ellipsoid (MVE).
ERIC Educational Resources Information Center
Blankmeyer, Eric
P. Rousseeuw and A. Leroy (1987) proposed a very robust alternative to classical estimates of mean vectors and covariance matrices, the Minimum Volume Ellipsoid (MVE). This paper describes the MVE technique and presents a BASIC program to implement it. The MVE is a "high breakdown" estimator, one that can cope with samples in which as…
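While the ERIC record describes a BASIC program, the resampling idea behind the MVE can be sketched compactly in Python (an approximation under simplified assumptions: no chi-squared consistency correction, an illustrative subset count, and a fixed seed):

```python
import numpy as np

def mve_location_scatter(X, n_trials=500, seed=0):
    """Resampling approximation to the Minimum Volume Ellipsoid (MVE):
    draw many (p+1)-point subsets, inflate each candidate ellipsoid just
    enough to cover h = (n+p+1)//2 points, and keep the one of smallest
    volume (i.e. smallest determinant of the scaled scatter matrix)."""
    n, p = X.shape
    h = (n + p + 1) // 2
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_trials):
        idx = rng.choice(n, size=p + 1, replace=False)
        mu = X[idx].mean(axis=0)
        C = np.cov(X[idx].T)
        if np.linalg.det(C) < 1e-12:       # degenerate subset: skip
            continue
        # Squared Mahalanobis distances of all points to this candidate.
        D = X - mu
        d2 = np.einsum('ij,jk,ik->i', D, np.linalg.inv(C), D)
        m2 = np.sort(d2)[h - 1]            # inflation factor covering h points
        vol = np.linalg.det(m2 * C)        # proportional to squared volume
        if best is None or vol < best[0]:
            best = (vol, mu, m2 * C)
    return best[1], best[2]
```

Because only about half the points must be covered, a clump of outliers cannot drag the ellipsoid toward itself, which is the "high breakdown" property the record describes.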
Graph properties of synchronized cortical networks during visual working memory maintenance.
Palva, Satu; Monto, Simo; Palva, J Matias
2010-02-15
Oscillatory synchronization facilitates communication in neuronal networks and is intimately associated with human cognition. Neuronal activity in the human brain can be non-invasively imaged with magneto- (MEG) and electroencephalography (EEG), but the large-scale structure of synchronized cortical networks supporting cognitive processing has remained uncharacterized. We combined simultaneous MEG and EEG (MEEG) recordings with minimum-norm-estimate-based inverse modeling to investigate the structure of oscillatory phase synchronized networks that were active during visual working memory (VWM) maintenance. Inter-areal phase-synchrony was quantified as a function of time and frequency by single-trial phase-difference estimates of cortical patches covering the entire cortical surfaces. The resulting networks were characterized with a number of network metrics that were then compared between delta/theta- (3-6 Hz), alpha- (7-13 Hz), beta- (16-25 Hz), and gamma- (30-80 Hz) frequency bands. We found several salient differences between frequency bands. Alpha- and beta-band networks were more clustered and small-world like but had smaller global efficiency than the networks in the delta/theta and gamma bands. Alpha- and beta-band networks also had truncated-power-law degree distributions and high k-core numbers. The data converge on showing that during the VWM-retention period, human cortical alpha- and beta-band networks have a memory-load dependent, scale-free small-world structure with densely connected core-like structures. These data further show that synchronized dynamic networks underlying a specific cognitive state can exhibit distinct frequency-dependent network structures that could support distinct functional roles. Copyright 2009 Elsevier Inc. All rights reserved.
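The graph metrics compared across frequency bands above (clustering, global efficiency, k-core number, degree distribution) can be computed with standard tools; a toy sketch on a Watts-Strogatz small-world graph standing in for a synchrony network (the graph and its parameters are illustrative, not MEEG data):

```python
import networkx as nx

# Toy stand-in for a phase-synchrony network: a small-world graph of
# 100 nodes, each wired to 6 ring neighbors with 10% random rewiring.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)

clustering = nx.average_clustering(G)       # high for small-world graphs
efficiency = nx.global_efficiency(G)        # inverse shortest-path measure
max_core = max(nx.core_number(G).values())  # largest k-core index
degrees = sorted(d for _, d in G.degree())  # degree distribution
```

The abstract's finding that alpha/beta networks are more clustered but less globally efficient than delta/theta and gamma networks corresponds to comparing exactly these quantities between band-specific graphs.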
IN SITU ESTIMATES OF FOREST LAI FOR MODIS DATA VALIDATION
Satellite remote sensor data are commonly used to assess ecosystem conditions through synoptic monitoring of terrestrial vegetation extent, biomass, and seasonal dynamics. Two commonly used vegetation indices that can be derived from various remote sensor systems include the Norm...
Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology
Ross, June; Westaway, Kira; Travers, Meg; Morwood, Michael J; Hayward, John
2016-01-01
The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world’s earliest art assemblages. Tantalising excavated evidence found across northern Australia suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages is to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted, with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated. PMID:27579865
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-01-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests and US$22 for two full blood count/clinical chemistry tests. Conclusions: Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. (Hepatology 2015;61:1174–1182) PMID:25482139
An estimate of the number of tropical tree species.
Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M
2015-06-16
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
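The Fisher's-alpha extrapolation described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the bisection solver and function names are assumptions):

```python
import math

def fisher_alpha(n_individuals, n_species, tol=1e-9):
    """Solve S = alpha * ln(1 + N / alpha) for alpha by bisection;
    S is monotone increasing in alpha, so bisection is safe."""
    lo, hi = 1e-6, float(n_species)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if mid * math.log(1.0 + n_individuals / mid) > n_species:
            hi = mid          # predicted S too large -> alpha too large
        else:
            lo = mid
    return 0.5 * (lo + hi)

def expected_species(alpha, n_individuals):
    """Fisher's log-series prediction of species count for N stems."""
    return alpha * math.log(1.0 + n_individuals / alpha)
```

Fitting to the inventory quoted in the abstract (657,630 trees, 11,371 species) gives alpha near 1,950; feeding an approximate pantropical stem total into `expected_species` then yields the extrapolated richness.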
Åstrøm, Anne N; Lie, Stein Atle; Gülcan, Ferda
2018-05-31
Understanding factors that affect dental attendance behavior helps in constructing effective oral health campaigns. A socio-cognitive model that adequately explains variance in regular dental attendance has yet to be validated among younger adults in Norway. Focusing on a representative sample of younger Norwegian adults, this cross-sectional study provided an empirical test of the Theory of Planned Behavior (TPB) augmented with descriptive norm and action planning, and estimated direct and indirect effects of attitudes, subjective norms, descriptive norms, perceived behavioral control and action planning on intended and self-reported regular dental attendance. Self-administered questionnaires provided by 2551 25-35-year-olds, randomly selected from the Norwegian national population registry, were used to assess socio-demographic factors, dental attendance, as well as the constructs of the augmented TPB model (attitudes, subjective norms, descriptive norms, intention, action planning). A two-stage process of structural equation modelling (SEM) was used to test the augmented TPB model. Confirmatory factor analysis (CFA) confirmed the proposed correlated 6-factor measurement model after re-specification. SEM revealed that attitudes, perceived behavioral control, subjective norms and descriptive norms explained intention. The corresponding standardized regression coefficients were, respectively, β = 0.70, β = 0.18, β = -0.17 and β = 0.11 (p < 0.001). Intention (β = 0.46) predicted action planning, and action planning (β = 0.19) predicted dental attendance behavior (p < 0.001). The model revealed indirect effects of intention and perceived behavioral control on behavior through action planning and through intention and action planning, respectively. The final model explained 64% and 41% of the total variance in intention and dental attendance behavior, respectively.
The findings support the utility of the TPB, the expanded normative component and action planning in predicting younger adults' intended- and self-reported dental attendance. Interventions targeting young adults' dental attendance might usefully focus on positive consequences following this behavior accompanied with modeling and group performance.
Riou França, Lionel; Dautzenberg, Bertrand; Falissard, Bruno; Reynaud, Michel
2009-01-01
Background: Knowledge of the correlates of smoking is a first step to successful prevention interventions. The social norms theory hypothesises that students' smoking behaviour is linked to their perception of norms for use of tobacco. This study was designed to test the theory that smoking is associated with perceived norms, controlling for other correlates of smoking. Methods: In a pencil-and-paper questionnaire, 721 second-year students in sociology, medicine, foreign language or nursing studies estimated the number of cigarettes usually smoked in a month. 31 additional covariates were included as potential predictors of tobacco use. Multiple imputation was used to deal with missing values among covariates. The strength of the association of each variable with tobacco use was quantified by the inclusion frequencies of the variable in 1000 bootstrap sample backward selections. Being a smoker and the number of cigarettes smoked by smokers were modelled separately. Results: We retain 8 variables to predict the risk of smoking and 6 to predict the quantities smoked by smokers. The risk of being a smoker is increased by cannabis use, binge drinking, being unsupportive of smoke-free universities, perceived friends' approval of regular smoking, positive perceptions about tobacco, a high perceived prevalence of smoking among friends, reporting not being disturbed by people smoking in the university, and being female. The quantity of cigarettes smoked by smokers is greater for smokers reporting never being disturbed by smoke in the university, unsupportive of smoke-free universities, perceiving that their friends approve of regular smoking, having more negative beliefs about the tobacco industry, being sociology students and being among the older students. Conclusion: Other substance use, injunctive norms (friends' approval) and descriptive norms (friends' smoking prevalence) are associated with tobacco use. University-based prevention campaigns should take multiple substance use into account and focus on the norms most likely to have an impact on student smoking. PMID:19341453
Dempsey, Robert C; McAlaney, John; Helmer, Stefanie M; Pischke, Claudia R; Akvardar, Yildiz; Bewick, Bridgette M; Fawkner, Helen J; Guillen-Grima, Francisco; Stock, Christiane; Vriesacker, Bart; Van Hal, Guido; Salonna, Ferdinand; Kalina, Ondrej; Orosova, Olga; Mikolajczyk, Rafael T
2016-09-01
Perceptions of peer behavior and attitudes exert considerable social pressure on young adults to use substances. This study investigated whether European students perceive their peers' cannabis use and approval of cannabis use to be higher than their own personal behaviors and attitudes, and whether estimations of peer use and attitudes are associated with personal use and attitudes. University students (n = 4,131) from Belgium, Denmark, Germany, the Slovak Republic, Spain, Turkey, and the United Kingdom completed an online survey as part of the Social Norms Intervention for Polysubstance usE in students (SNIPE) Project, a feasibility study of a web-based normative feedback intervention for substance use. The survey assessed students' (a) personal substance use and attitudes and (b) perceptions of their peers' cannabis use (descriptive norms) and attitudes (injunctive norms). Although most respondents (92%) did not personally use cannabis in the past 2 months, the majority of students thought that the majority of their peers were using cannabis and that their peers had more permissive attitudes toward cannabis than they did. When we controlled for students' age, sex, study year, and religious beliefs, perceived peer descriptive norms were associated with personal cannabis use (odds ratio [OR] = 1.42; 95% CI [1.22, 1.64]) and perceived injunctive norms were associated with personal attitudes toward cannabis use (OR = 1.46; 95% CI [1.09, 1.94]). European students appear to possess similar discrepancies between personal and perceived peer norms for cannabis use and attitudes as found in North American students. Interventions that address such discrepancies may be effective in reducing cannabis use.
Clinical value of the VMI supplemental tests: a modified replication study.
Avi-Itzhak, Tamara; Obler, Doris Richard
2008-10-01
To carry out a modified replication of the study performed by Kulp and Sortor evaluating the clinical value of the information provided by Beery's visual-motor supplemental tests of Visual Perception (VP) and Motor Coordination (MC) in normally developed children. The objectives were to (a) estimate the correlations among the three test scores; (b) assess the predictive power of the VP and MC scores in explaining the variance in Visual-Motor Integration (VMI) scores; and (c) examine whether poor performance on the VMI is related to poor performance on VP or MC. Methods: A convenience sample of 71 children ages 4 and 5 years (M = 4.62 ± 0.43) participated in the study. The supplemental tests significantly (F = 9.59; df = 2; p ≤ 0.001) explained 22% of the variance in VMI performance. Only VP was significantly related to VMI (beta = 0.39; T = 3.49), accounting for the total amount of explained variance. Using the study population norms, 11 children (16% of total sample) did poorly on the VMI; of those 11, 73% did poorly on the VP, and none did poorly on the MC. None of these 11 did poorly on both the VP and MC. Nine percent of the total sample who did poorly on the VP performed within the norm on the VMI. Thirteen percent who performed poorly on the MC performed within the norm on the VMI. Using the VMI published norms, 14 children (20% of total sample) who did poorly on the VP performed within the norm on the VMI. Forty-eight percent who did poorly on MC performed within the norm on the VMI. Findings supported Kulp and Sortor's conclusions that each area should be individually evaluated during visual-perceptual assessment of children regardless of performance on the VMI.
Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source
NASA Astrophysics Data System (ADS)
Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen
2018-05-01
Let H be a Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with the source function f satisfying a local Lipschitz condition.
Tensor completion for estimating missing values in visual data.
Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping
2013-01-01
In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. 
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC, and between FaLRTC and HaLRTC the former is more efficient for obtaining a low-accuracy solution while the latter is preferred if a high-accuracy solution is desired.
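The tensor trace norm defined in the paper generalizes the matrix nuclear norm through mode-n unfoldings; a minimal sketch of that definition (assuming the common weighted-average-of-unfoldings form; `unfold` and `tensor_trace_norm` are illustrative names, not the authors' API):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the mode-n fibers of T become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_trace_norm(T, weights=None):
    """Trace norm of a tensor as a weighted average of the matrix
    nuclear norms of its mode-n unfoldings."""
    n = T.ndim
    if weights is None:
        weights = [1.0 / n] * n        # equal weights by default
    return float(sum(w * np.linalg.norm(unfold(T, m), 'nuc')
                     for m, w in zip(range(n), weights)))
```

For a rank-1 tensor a∘b∘c every unfolding is a rank-1 matrix, so the value reduces to ‖a‖·‖b‖·‖c‖, mirroring how the matrix nuclear norm behaves on outer products.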
Design of optimally normal minimum gain controllers by continuation method
NASA Technical Reports Server (NTRS)
Lim, K. B.; Juang, J.-N.; Kim, Z. C.
1989-01-01
A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
2008-10-01
In February 2002, the Director General – Military Human Resource Policy and Planning abolished the Canadian Forces minimum height standard. Defence R&D Canada – Toronto; October 2008.
Stable Estimation of a Covariance Matrix Guided by Nuclear Norm Penalties
Chi, Eric C.; Lange, Kenneth
2014-01-01
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable target and leads to a MAP estimator that is consistent and asymptotically efficient. Thus, the MAP estimator gracefully transitions towards the sample covariance matrix as the number of samples grows relative to the number of covariates. The utility of the MAP estimator is demonstrated in two standard applications – discriminant analysis and EM clustering – in this sampling regime. PMID:25143662
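A minimal sketch of shrinking a sample covariance toward a well-conditioned target, in the spirit of (but not identical to) the paper's MAP estimator; the linear-shrinkage form, the scaled-identity target, and the `rho` parameter are assumptions for illustration:

```python
import numpy as np

def shrink_covariance(X, rho):
    """Linear shrinkage of the sample covariance toward a scaled-identity
    target; rho in [0, 1] (0 = pure sample covariance, 1 = pure target)."""
    S = np.cov(X, rowvar=False)               # p x p sample covariance
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)    # well-conditioned target
    return (1.0 - rho) * S + rho * target
```

Even when n < p leaves the sample covariance singular, any rho > 0 yields a positive-definite, hence invertible, estimate, which is the "well-conditioned" property the abstract emphasizes.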
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
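The inverse-probability-of-treatment-weighted comparator mentioned above can be sketched as follows (an illustrative Hajek-form IPTW estimate of a treatment-specific mean, not the paper's TMLE; the simulated data and `iptw_mean` name are assumptions):

```python
import numpy as np

def iptw_mean(y, a, propensity):
    """Inverse-probability-of-treatment-weighted (Hajek) estimate of
    E[Y(1)]: reweight treated outcomes by 1/propensity."""
    w = a / propensity                          # weight is zero for untreated units
    return float(np.sum(w * y) / np.sum(w))

# Simulated confounded data: x raises both treatment probability and outcome.
rng = np.random.default_rng(1)
n = 100_000
x = rng.integers(0, 2, n)                       # binary confounder
e = np.where(x == 1, 0.8, 0.2)                  # true propensity score
a = (rng.random(n) < e).astype(float)           # treatment assignment
y = x + a                                       # Y(1) = x + 1, so E[Y(1)] = 1.5
estimate = iptw_mean(y, a, e)
```

The naive mean among the treated is biased upward here (treated units over-represent x = 1), while the weighted estimate recovers E[Y(1)] = 1.5.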
Wambeam, Rodney A; Canen, Eric L; Linkenbach, Jeff; Otto, Jay
2014-02-01
Effective community prevention of substance abuse involves the integration of policies and programs to address many different risk and protective factors across the social ecology. This study sought to examine whether youth perceptions of peer substance use norms were operating as a risk factor at the same level as other known risk factors in a statewide community prevention effort. Several different analytical techniques were employed to examine the self-reported data from a sample of over 8,000 students in grades 6, 8, 10, and 12 from across Wyoming using a survey based on a risk and protective factor model. The findings of this study revealed that youth misperception of peer substance use norms operates at a level of significance similar to other known risk factors, and these misperceptions are a risk factor that should be measured in order to estimate their relationship with substance use. The measurement of this risk factor has important strategic implications for community prevention.
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
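The MDES framework referenced above rests on a simple formula in the individual-random-assignment case; the sketch below is a hedged illustration (the 2.80 multiplier is the standard large-sample value for α = .05 two-tailed at 80% power, and `mdes_individual` is an assumed helper name, not the tool's API):

```python
import math

def mdes_individual(n, p_treat=0.5, r2=0.0, multiplier=2.80):
    """Minimum detectable effect size (in standard deviation units) for
    individual random assignment.  multiplier ~ t_{alpha/2} + t_{power}
    (about 2.80 for alpha = .05 two-tailed and 80% power at large n);
    r2 is the share of outcome variance explained by covariates."""
    return multiplier * math.sqrt((1.0 - r2) / (p_treat * (1.0 - p_treat) * n))
```

With 400 subjects split evenly, the MDES is 2.80 · sqrt(1/100) = 0.28 SD; adding covariates with R² = 0.75 halves it to 0.14 SD, which is why such tools emphasize covariate adjustment.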
A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes. PMID:23864831
Estimating affective word covariates using word association data.
Van Rensbergen, Bram; De Deyne, Simon; Storms, Gert
2016-12-01
Word ratings on affective dimensions are an important tool in psycholinguistic research. Traditionally, they are obtained by asking participants to rate words on each dimension, a time-consuming procedure. As such, there has been some interest in computationally generating norms, by extrapolating words' affective ratings using their semantic similarity to words for which these values are already known. So far, most attempts have derived similarity from word co-occurrence in text corpora. In the current paper, we obtain similarity from word association data. We use these similarity ratings to predict the valence, arousal, and dominance of 14,000 Dutch words with the help of two extrapolation methods: Orientation towards Paradigm Words and k-Nearest Neighbors. The resulting estimates show very high correlations with human ratings when using Orientation towards Paradigm Words, and even higher correlations when using k-Nearest Neighbors. We discuss possible theoretical accounts of our results and compare our findings with previous attempts at computationally generating affective norms.
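The k-Nearest Neighbors extrapolation step can be sketched as follows (illustrative; `knn_extrapolate`, the similarity-weighted mean, and the toy values are assumptions, not the authors' implementation):

```python
import numpy as np

def knn_extrapolate(sim_row, known_ratings, k=3):
    """Predict a word's affective rating as the similarity-weighted mean
    of the ratings of its k most similar rated words."""
    idx = np.argsort(sim_row)[-k:]        # indices of the k nearest neighbors
    w = sim_row[idx]
    return float(np.dot(w, known_ratings[idx]) / np.sum(w))
```

In practice `sim_row` would hold word-association-based similarities between the target word and every word with a known valence, arousal, or dominance rating.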
Quantum Ergodicity and L^p Norms of Restrictions of Eigenfunctions
NASA Astrophysics Data System (ADS)
Hezari, Hamid
2018-02-01
We prove an analogue of Sogge's local L^p estimates for L^p norms of restrictions of eigenfunctions to submanifolds, and use it to show that for quantum ergodic eigenfunctions one can get improvements of the results of Burq-Gérard-Tzvetkov, Hu, and Chen-Sogge. The improvements are logarithmic on negatively curved manifolds (without boundary) and by o(1) for manifolds (with or without boundary) with ergodic geodesic flows. In the case of ergodic billiards with piecewise smooth boundary, we get o(1) improvements on L^∞ estimates of Cauchy data away from a shrinking neighborhood of the corners, and as a result, using the methods of Ghosh et al., Jung and Zelditch, Jung and Zelditch, we get that the number of nodal domains of 2-dimensional ergodic billiards tends to infinity as λ → ∞. These results work only for a full density subsequence of any given orthonormal basis of eigenfunctions. We also present an extension of the L^p estimates of Burq-Gérard-Tzvetkov, Hu, and Chen-Sogge for the restrictions of Dirichlet and Neumann eigenfunctions to compact submanifolds of the interior of manifolds with piecewise smooth boundary. This part does not assume ergodicity on the manifolds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.
Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
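The classical kernel form of the Nadaraya-Watson estimator (as opposed to the orthonormal-system variant studied above) can be sketched in a few lines, as a point of reference for what the estimator computes:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.1):
    """Kernel regression: estimate f(x) at each query point as the
    weighted mean of y_train, with Gaussian weights K((x - x_i) / h)."""
    d = (x_query[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d ** 2)                 # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)      # locally weighted average
```

Because the output is a convex combination of observed responses, a constant target is reproduced exactly, and smooth targets are recovered up to kernel bias controlled by the bandwidth h.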
NASA Technical Reports Server (NTRS)
Wahba, G.
1982-01-01
Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
NASA Astrophysics Data System (ADS)
Zhao, Xiaopeng; Zhu, Mingxuan
2018-04-01
In this paper, we consider the small initial data global well-posedness of solutions for the magnetohydrodynamics with Hall and ion-slip effects in R^3. In addition, we also establish the temporal decay estimates for the weak solutions. With these estimates in hand, we study the algebraic time decay for higher-order Sobolev norms of small initial data solutions.
Minimum Wages and the Economic Well-Being of Single Mothers
ERIC Educational Resources Information Center
Sabia, Joseph J.
2008-01-01
Using pooled cross-sectional data from the 1992 to 2005 March Current Population Survey (CPS), this study examines the relationship between minimum wage increases and the economic well-being of single mothers. Estimation results show that minimum wage increases were ineffective at reducing poverty among single mothers. Most working single mothers…
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
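The ANOVA route to the repeatability coefficient and the minimum number of measurements can be sketched with the standard one-way variance-components formulas (an illustration, not the authors' exact computation; function names are hypothetical):

```python
import numpy as np

def repeatability_anova(data):
    """One-way ANOVA estimate of the repeatability coefficient r.

    data: (genotypes x measurements) matrix of phenotypic records.
    """
    g, m = data.shape
    geno_means = data.mean(axis=1)
    msb = m * ((geno_means - data.mean()) ** 2).sum() / (g - 1)      # between genotypes
    msw = ((data - geno_means[:, None]) ** 2).sum() / (g * (m - 1))  # within genotypes
    var_g = (msb - msw) / m                  # genotypic variance component
    return var_g / (var_g + msw)

def min_measurements(r, target_r2=0.85):
    """Measurements needed to predict the true value with determination R^2."""
    return int(np.ceil(target_r2 * (1 - r) / ((1 - target_r2) * r)))
```

For example, a trait with r = 0.5 would need six measurements to reach 85% accuracy under this formula; higher r estimates (as CPCOV gave here) translate directly into fewer required measurements.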
Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity.
Santos, Fabiane Igansi de Castro Dos; Marini, Naciele; Santos, Railson Schreinert Dos; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio; de Oliveira, Antonio Costa
2018-01-01
Reverse Transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, obtaining accurate results depends on data normalization using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes under iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stressful condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies of iron toxicity in both indica and japonica subspecies. We also show the importance of testing genes other than the traditional endogenous genes for use as normalizers.
Moving Forward with School Nutrition Policies: A Case Study of Policy Adherence in Nova Scotia.
McIsaac, Jessie-Lee D; Shearer, Cindy L; Veugelers, Paul J; Kirk, Sara F L
2015-12-01
Many Canadian school jurisdictions have developed nutrition policies to promote health and improve the nutritional status of children, but research is needed to clarify adherence, guide practice-related decisions, and move policy action forward. The purpose of this research was to evaluate policy adherence with a review of online lunch menus of elementary schools in Nova Scotia (NS) while also providing transferable evidence for other jurisdictions. School menus in NS were scanned and commonly offered items were categorized according to the minimum, moderate, or maximum nutrition categories in the NS policy. The results of the menu review showed variability in policy adherence that depended on food preparation practices by schools. Although further research is needed to clarify preparation practices, the previously reported challenges of healthy food preparation (e.g., cost, social norms) suggest that many schools in NS are likely not able to use these healthy preparations, signifying potential noncompliance with the policy. Leadership and partnerships are needed among researchers, policy makers, and nutrition practitioners to address the complexity of issues related to food marketing and social norms that influence school food environments, to inspire a culture where healthy and nutritious food is available and accessible to children.
The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2018-03-01
This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (2) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity).
The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yields good sensitivity and specificity for the diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.
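The cut-off selection above rests on receiver operating characteristic analysis. A minimal sketch of choosing a screening threshold by maximizing Youden's J (sensitivity + specificity − 1), on synthetic data rather than the BAND dataset, might look like:

```python
import numpy as np

def youden_cutoff(values, has_condition):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1.

    values: screening measurements (higher = more abnormal, e.g. NPC break in cm)
    has_condition: boolean reference diagnosis for each subject
    """
    values = np.asarray(values, float)
    has_condition = np.asarray(has_condition, bool)
    best_j, best_t = -np.inf, None
    for t in np.unique(values):
        pred = values >= t                       # flagged as abnormal at this cut-off
        sens = (pred & has_condition).sum() / has_condition.sum()
        spec = (~pred & ~has_condition).sum() / (~has_condition).sum()
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j
```

Each candidate threshold is one point on the ROC curve; the returned cut-off is the point farthest above the chance diagonal.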
An estimate of the number of tropical tree species
Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. 
L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.
2015-01-01
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
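The extrapolation rests on Fisher's log-series relation S = α ln(1 + N/α). A minimal sketch of fitting α from an inventory and predicting richness at a larger stem total (illustrative numbers, not the paper's data; helper names are hypothetical):

```python
import math

def fishers_alpha(n_individuals, n_species):
    """Solve S = alpha * ln(1 + N / alpha) for alpha by bisection.

    The left side is increasing in alpha, with f(0+) < 0 and f(S) > 0
    whenever N/S is reasonably large, so bisection on (0, S] converges.
    """
    f = lambda a: a * math.log1p(n_individuals / a) - n_species
    lo, hi = 1e-6, float(n_species)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def expected_species(alpha, n_individuals):
    """Fisher's log-series expected species count for N individuals."""
    return alpha * math.log1p(n_individuals / alpha)
```

Applying `expected_species` with the fitted α and an approximate pantropical stem total is the shape of the minimum-richness estimate described above.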
NASA Technical Reports Server (NTRS)
Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.
1978-01-01
On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.
Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro
2013-01-01
This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. To avoid side effects, it is desirable to reduce the amount of anesthetic administered during surgery, and many studies of hypnosis control systems have pursued this goal. Most use the Bispectral Index (BIS), another hypnosis index, but BIS depends on the anesthetic drug used and changes nonsmoothly near certain values. In contrast, aepEX distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis, and adjusting the infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated by exploiting the pharmacodynamics of aepEX. The infusion rate of propofol is adjusted so that the effect-site concentration of propofol is kept near, and always above, the minimum effect-site concentration. Simulation results show that the minimum concentration can be estimated appropriately and that the proposed method can maintain adequate hypnosis while reducing the total infusion amount of propofol.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Imbir, Kamil K.
2017-01-01
The Affective Norms for Polish Short Texts (ANPST) dataset (Imbir, 2016d) is a list of 718 affective sentence stimuli with known affective properties with respect to subjectively perceived valence, arousal, dominance, origin, subjective significance, and source. This article examines the reliability of the ANPST and the impact of population type and sex on affective ratings. The ANPST dataset was introduced to provide a recognized method of eliciting affective states with linguistic stimuli that are more complex than single words, include contextual information, and are thus less ambiguous in interpretation than single words. Analysis of the properties of the ANPST dataset showed that the collected norms are reliable in terms of split-half estimation and that the distributions of ratings are similar to those obtained in other affective norms studies. The pattern of correlations was the same as that found in analysis of an affective norms dataset for words based on the same six variables. Female psychology students' valence ratings were more polarized than those of their female peers studying other subjects, but their arousal ratings were only higher for negative words. Differences also appeared for all other measured dimensions. Women's valence ratings were found to be more polarized, and their arousal ratings higher, than those made by men, and differences were also present for dominance, origin, and subjective significance. The ANPST is the first Polish-language list of sentence stimuli and could easily be adapted for other languages and cultures. PMID:28611707
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
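The norm-reference standard used above (mean minus 1 SD) is simple enough to sketch directly; the helper names below are illustrative, and the scores are toy values rather than the study's data:

```python
import numpy as np

def norm_reference_pass_mark(scores, n_sd=1.0):
    """Norm-reference standard: pass mark = cohort mean minus n_sd SDs."""
    scores = np.asarray(scores, float)
    return scores.mean() - n_sd * scores.std(ddof=1)

def pass_rate(scores, mark):
    """Fraction of candidates scoring at or above the pass mark."""
    return float((np.asarray(scores, float) >= mark).mean())
```

Because the mark moves with the cohort, a norm-referenced pass rate reflects relative standing, whereas an Angoff (criterion-referenced) mark is fixed by judges and can pass every candidate, as happened here.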
Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.
ERIC Educational Resources Information Center
Mincy, Ronald B.
Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…
Minimum Wages and School Enrollment of Teenagers: A Look at the 1990's.
ERIC Educational Resources Information Center
Chaplin, Duncan D.; Turner, Mark D.; Pape, Andreas D.
2003-01-01
Estimates the effects of higher minimum wages on school enrollment using the Common Core of Data. Controlling for local labor market conditions and state and year fixed effects, finds some evidence that higher minimum wages reduce teen school enrollment in states where students drop out before age 18. (23 references) (Author/PKP)
Hawthorne, Graeme; Korn, Sam; Richardson, Jeff
2013-02-01
To provide Australian health-related quality of life (HRQoL) population norms, based on utility scores from the Assessment of Quality of Life (AQoL) measure, a participant-reported outcomes (PRO) instrument. The data were from the 2007 National Survey of Mental Health and Wellbeing. AQoL scores were analysed by age cohorts, gender, other demographic characteristics, and mental and physical health variables. The AQoL utility score mean was 0.81 (95%CI 0.81-0.82), and 47% obtained scores indicating a very high HRQoL (>0.90). HRQoL declined gradually across age groups, with older adults reporting lower scores. Based on effect sizes (ESs), there were small losses in HRQoL associated with other demographic variables (e.g. by lack of labour force participation, ES(median): 0.27). Those with current mental health syndromes reported moderate losses in HRQoL (ES(median): 0.64), while those with physical health conditions generally also reported moderate losses in HRQoL (ES(median): 0.41). This study has provided contemporary Australian population norms for HRQoL that may be used by researchers as indicators allowing interpretation and estimation of population health (e.g. estimation of the burden of disease), cross-comparison between studies, and the identification of health inequalities, and to provide benchmarks for health care interventions. © 2013 The Authors. ANZJPH © 2013 Public Health Association of Australia.
Beaudette, Shawn M; Howarth, Samuel J; Graham, Ryan B; Brown, Stephen H M
2016-10-01
Several different state-space reconstruction methods have been employed to assess the local dynamic stability (LDS) of a 3D kinematic system. One common method is to apply a Euclidean norm (N) transformation to three orthogonal x, y, and z time series and then calculate the maximum finite-time Lyapunov exponent (λmax) from the resultant N waveform (using a time-delayed state-space reconstruction technique). By essentially acting as a weighted average, N has been suggested to account for simultaneous expansion and contraction along separate degrees of freedom within a 3D system (e.g. the coupling of dynamic movements between orthogonal planes). However, when estimating LDS using N, the non-linear transformations inherent in the calculation of N should be accounted for. Results demonstrate that applying N to 3D time-series data with arbitrary magnitudes of relative bias and zero-crossings introduces error into estimates of λmax obtained through N. To develop a standard for the analysis of 3D dynamic kinematic waveforms, we suggest that all dimensions of a 3D signal be independently shifted to avoid the incidence of zero-crossings prior to the calculation of N and the subsequent estimation of LDS through λmax. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
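The suggested remedy, shifting each axis away from zero before computing N, can be sketched as follows (one possible shifting convention, not necessarily the authors' exact procedure; assumes non-constant signals):

```python
import numpy as np

def shifted_euclidean_norm(x, y, z):
    """Euclidean norm N after shifting each axis away from zero.

    Each time series is offset so its minimum equals its own range, keeping
    every component strictly positive (no zero-crossings) before N is formed.
    """
    def shift(s):
        s = np.asarray(s, float)
        return s - s.min() + np.ptp(s)   # strictly positive for non-constant s
    xs, ys, zs = shift(x), shift(y), shift(z)
    return np.sqrt(xs ** 2 + ys ** 2 + zs ** 2)
```

Without the shift, components that cross zero can cancel inside the square root and distort the resultant waveform from which λmax is estimated.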
Streit, M; Reinhardt, F; Thaller, G; Bennewitz, J
2013-01-01
Genotype by environment interaction (G × E) has been widely reported in dairy cattle. If the environment can be measured on a continuous scale, reaction norms can be applied to study G × E. The average herd milk production level has frequently been used as an environmental descriptor because it is influenced by the level of feeding or the feeding regimen. Another important environmental factor is the level of udder health and hygiene, for which the average herd somatic cell count might be a descriptor. In the present study, we conducted a genome-wide association analysis to identify single nucleotide polymorphisms (SNP) that affect intercept and slope of milk protein yield reaction norms when using the average herd test-day solution for somatic cell score as an environmental descriptor. Sire estimates for intercept and slope of the reaction norms were calculated from around 12 million daughter records, using linear reaction norm models. Sires were genotyped for ~54,000 SNP. The sire estimates were used as observations in the association analysis, using 1,797 sires. Significant SNP were confirmed in an independent validation set consisting of 500 sires. A known major gene affecting protein yield was included as a covariable in the statistical model. Sixty (21) SNP were confirmed for intercept with P ≤ 0.01 (P ≤ 0.001) in the validation set, and 28 and 11 SNP, respectively, were confirmed for slope. Most but not all SNP affecting slope also affected intercept. Comparison with an earlier study revealed that SNP affecting slope were, in general, also significant for slope when the environment was modeled by the average herd milk production level, although the two environmental descriptors were poorly correlated. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
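In miniature, the reaction norm estimates are per-sire intercepts and slopes of a linear regression of performance on the environmental descriptor. The sketch below uses ordinary least squares on toy data rather than the full linear reaction norm (mixed) models applied to the daughter records:

```python
import numpy as np

def reaction_norm_estimates(env, y, sire_ids):
    """Per-sire intercept and slope of the linear reaction norm y = a + b*env."""
    env, y, sire_ids = (np.asarray(v) for v in (env, y, sire_ids))
    out = {}
    for s in np.unique(sire_ids):
        m = sire_ids == s
        A = np.column_stack([np.ones(m.sum()), env[m].astype(float)])
        coef, *_ = np.linalg.lstsq(A, y[m].astype(float), rcond=None)
        out[int(s)] = (coef[0], coef[1])   # (intercept, slope)
    return out
```

These per-sire (intercept, slope) pairs are the kind of observations that the association analysis then regresses on SNP genotypes; a nonzero slope signals G × E sensitivity to the environmental descriptor.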
Estimating Bodywave Arrivals and Attenuation from Seismic Noise
2009-09-30
Variation in clutch size in relation to nest size in birds
Møller, Anders P; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Góźdź, Iga; Grégoire, Arnaud; Gustafsson, Lars; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Järvinen, Antero; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morales-Fernaz, Judith; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Pimentel, Carla S; Pinxten, Rianne; Priedniece, Ilze; Quidoz, Marie-Claude; Remeš, Vladimir; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; da Silva, Luís P; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Török, János; Tryjanowski, Piotr; van Noordwijk, Arie J; von Numers, Mikael; Walankiewicz, Wiesław; Lambrechts, Marcel M
2014-01-01
Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited to small-scale studies performed over short time periods. Here, we quantified the relationship between clutch size and nest size, using an exhaustive database of 116 slope estimates based on 17,472 nests of 21 species of hole and non-hole-nesting birds. There was a significant, positive relationship between clutch size and the base area of the nest box or the nest, and this relationship did not differ significantly between open nesting and hole-nesting species. The slope of the relationship showed significant intraspecific and interspecific heterogeneity among four species of secondary hole-nesting species, but also among all 116 slope estimates. The estimated relationship between clutch size and nest box base area in study sites with more than a single size of nest box was not significantly different from the relationship using studies with only a single size of nest box. The slope of the relationship between clutch size and nest base area in different species of birds was significantly negatively related to minimum base area, and less so to maximum base area in a given study. These findings are consistent with the hypothesis that bird species have a general reaction norm reflecting the relationship between nest size and clutch size. Further, they suggest that scientists may influence the clutch size decisions of hole-nesting birds through the provisioning of nest boxes of varying sizes. PMID:25478150
Variation in clutch size in relation to nest size in birds.
Møller, Anders P; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Góźdź, Iga; Grégoire, Arnaud; Gustafsson, Lars; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Järvinen, Antero; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morales-Fernaz, Judith; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Pimentel, Carla S; Pinxten, Rianne; Priedniece, Ilze; Quidoz, Marie-Claude; Remeš, Vladimir; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; da Silva, Luís P; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Török, János; Tryjanowski, Piotr; van Noordwijk, Arie J; von Numers, Mikael; Walankiewicz, Wiesław; Lambrechts, Marcel M
2014-09-01
Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited to small-scale studies performed over short time periods. Here, we quantified the relationship between clutch size and nest size, using an exhaustive database of 116 slope estimates based on 17,472 nests of 21 species of hole and non-hole-nesting birds. There was a significant, positive relationship between clutch size and the base area of the nest box or the nest, and this relationship did not differ significantly between open nesting and hole-nesting species. The slope of the relationship showed significant intraspecific and interspecific heterogeneity among four species of secondary hole-nesting species, but also among all 116 slope estimates. The estimated relationship between clutch size and nest box base area in study sites with more than a single size of nest box was not significantly different from the relationship using studies with only a single size of nest box. The slope of the relationship between clutch size and nest base area in different species of birds was significantly negatively related to minimum base area, and less so to maximum base area in a given study. These findings are consistent with the hypothesis that bird species have a general reaction norm reflecting the relationship between nest size and clutch size. Further, they suggest that scientists may influence the clutch size decisions of hole-nesting birds through the provisioning of nest boxes of varying sizes. PMID:25478150
Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2016-12-01
Statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been developed recently, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes on California faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
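The BIP formulation described above can be sketched on a toy problem. The event slips and target slip rates below are invented for illustration (the study uses UCERF3 slip rates and a millennia-long synthetic catalog); auxiliary continuous variables linearize the L1 misfit so an off-the-shelf MILP solver can minimize it:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy BIP in the spirit of the abstract (sizes and values are invented):
# assign each synthetic earthquake to exactly one fault so that the
# accumulated slip matches the target slip rates in an L1-norm sense.
slip = np.array([4.0, 3.0, 3.0, 2.0, 2.0, 1.0])  # slip contributed by each event
target = np.array([10.0, 5.0])                   # target slip rate per fault
n_eq, n_f = len(slip), len(target)
n_x = n_eq * n_f                                 # binary x[i, j]: event i on fault j

# Objective: minimize sum_j t_j, where t_j bounds |achieved_j - target_j|.
c = np.concatenate([np.zeros(n_x), np.ones(n_f)])

rows, lb, ub = [], [], []
for i in range(n_eq):                            # each event assigned exactly once
    row = np.zeros(n_x + n_f)
    row[i * n_f:(i + 1) * n_f] = 1.0
    rows.append(row); lb.append(1.0); ub.append(1.0)
for j in range(n_f):                             # linearized L1 misfit constraints
    row = np.zeros(n_x + n_f)
    row[j:n_x:n_f] = slip                        # achieved slip on fault j
    row[n_x + j] = -1.0                          # achieved_j - t_j <= target_j
    rows.append(row); lb.append(-np.inf); ub.append(target[j])
    row2 = -row.copy()
    row2[n_x + j] = -1.0                         # -achieved_j - t_j <= -target_j
    rows.append(row2); lb.append(-np.inf); ub.append(-target[j])

res = milp(
    c,
    integrality=np.concatenate([np.ones(n_x), np.zeros(n_f)]),  # x binary, t continuous
    bounds=Bounds(0, np.concatenate([np.ones(n_x), np.full(n_f, np.inf)])),
    constraints=LinearConstraint(np.array(rows), lb, ub),
)
```

Here a perfect partition exists (4+3+3 = 10 and 2+2+1 = 5), so the optimal L1 misfit is zero; in the study, slip-rate uncertainty bounds would enter as additional explicit constraints rather than a pure misfit objective.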
Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric
2014-01-01
Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. 
PMID:25426044
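As a generic illustration of the minimum-norm inverse estimates referred to above (an unweighted, non-noise-normalized sketch, not the aMEG pipeline; the forward matrix and source index are synthetic), the regularized minimum-L2-norm source estimate for an underdetermined forward model G is x_hat = G^T (G G^T + lambda I)^{-1} y:

```python
import numpy as np

# Minimum-norm inverse sketch: among all source configurations consistent
# with the sensor data y (in a regularized sense), pick the one with the
# smallest L2 norm. G is a synthetic "lead field" (sensors x sources).
rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 200
G = rng.standard_normal((n_sensors, n_sources))

x_true = np.zeros(n_sources)
x_true[17] = 1.0                 # one active source
y = G @ x_true                   # noiseless sensor measurements

lam = 1e-3                       # small Tikhonov regularization
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)
```

The estimate reproduces the measurements almost exactly, peaks at the true source, and by construction has a norm no larger than that of the true (feasible) source vector; noise normalization, as used in the study, would further divide each source estimate by its noise sensitivity.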
On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
1997-01-01
In previous work, the determination of uncertainty models via minimum-norm model validation was based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this leads to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. These issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given up to a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result leads to less conservative and more realistic uncertainty models for use in robust control.
Observations of non-linear plasmon damping in dense plasmas
NASA Astrophysics Data System (ADS)
Witte, B. B. L.; Sperling, P.; French, M.; Recoules, V.; Glenzer, S. H.; Redmer, R.
2018-05-01
We present simulations using finite-temperature density-functional-theory molecular dynamics to calculate dynamic dielectric properties in warm dense aluminum. The comparison between exchange-correlation functionals in the Perdew-Burke-Ernzerhof (PBE), Strongly Constrained and Appropriately Normed (SCAN), and Heyd-Scuseria-Ernzerhof (HSE) approximations indicates evident differences in the electron transition energies, dc conductivity, and Lorenz number. The HSE calculations show excellent agreement with x-ray scattering data [Witte et al., Phys. Rev. Lett. 118, 225001 (2017)] as well as dc conductivity and absorption measurements. These findings demonstrate non-Drude behavior of the dynamic conductivity above the Cooper minimum that needs to be taken into account to determine optical properties in the warm dense matter regime.
The primary prevention of alcohol problems: a critical review of the research literature.
Moskowitz, J M
1989-01-01
The research evaluating the effects of programs and policies in reducing the incidence of alcohol problems is critically reviewed. Four types of preventive interventions are examined including: (1) policies affecting the physical, economic and social availability of alcohol (e.g., minimum legal drinking age, price and advertising of alcohol), (2) formal social controls on alcohol-related behavior (e.g., drinking-driving laws), (3) primary prevention programs (e.g., school-based alcohol education), and (4) environmental safety measures (e.g., automobile airbags). The research generally supports the efficacy of three alcohol-specific policies: raising the minimum legal drinking age to 21, increasing alcohol taxes and increasing the enforcement of drinking-driving laws. Also, research suggests that various environmental safety measures reduce the incidence of alcohol-related trauma. In contrast, little evidence currently exists to support the efficacy of primary prevention programs. However, a systems perspective of prevention suggests that prevention programs may become more efficacious after widespread adoption of prevention policies that lead to shifts in social norms regarding use of beverage alcohol.
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
ERIC Educational Resources Information Center
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.
2017-01-01
Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time-courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum norm (WMNLS) algorithm using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG when presenting novel and salient stimuli respectively, are related to the neuronal activation in large-scale networks, operating at different latencies and associated with different functional processes. PMID:20052528
Tracking speech comprehension in space and time.
Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D
2006-07-01
A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.
L1 norm based common spatial patterns decomposition for scalp EEG BCI.
Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong
2013-08-06
Brain-computer interfaces (BCIs) form one of the most popular branches of biomedical engineering. They aim to establish communication between disabled persons and auxiliary equipment in order to improve patients' lives. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situations, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion, or loose electrode contact. Because outliers and artifacts are usually observed with large amplitude, when CSP is solved in terms of the L2 norm their effect is exaggerated by the squaring operation, which ultimately degrades MI-based BCI performance. The L1 norm, by contrast, lowers outlier effects, as demonstrated in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation using the L1 norm, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied it, together with the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both a peer BCI dataset with simulated outliers and a dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSPs are statistically significant. The results on both the simulated and real BCI datasets consistently reveal that the proposed method achieves much higher classification accuracies than the conventional CSP and TR-CSP. By combining L1-norm-based eigendecomposition with Common Spatial Patterns, the proposed approach effectively improves the robustness of the BCI system to EEG outliers, and is thus promising for practical MI BCI applications, where outliers are inevitably introduced into EEG recordings.
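For context, the classical L2-norm CSP that the paper modifies reduces to a generalized eigenvalue problem on the two class covariance matrices; the L1 variant replaces this closed-form step with an iterative L1 objective. A self-contained sketch of the L2 baseline on synthetic two-class "EEG" (channel counts, mixing values, and trial sizes are all illustrative):

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic two-class data: class 1 has strong variance on channel 0,
# class 2 on channel 3. Trials are (n_channels, n_samples) arrays.
rng = np.random.default_rng(0)
n_ch, n_s = 4, 200
mix1 = np.diag([3.0, 1.0, 1.0, 0.3])
mix2 = np.diag([0.3, 1.0, 1.0, 3.0])
trials1 = [mix1 @ rng.standard_normal((n_ch, n_s)) for _ in range(20)]
trials2 = [mix2 @ rng.standard_normal((n_ch, n_s)) for _ in range(20)]

def mean_cov(trials):
    """Average of trace-normalized trial covariance matrices."""
    C = sum(x @ x.T / np.trace(x @ x.T) for x in trials)
    return C / len(trials)

C1, C2 = mean_cov(trials1), mean_cov(trials2)

# Classical CSP: solve C1 w = lambda (C1 + C2) w; the eigenvectors with
# extreme eigenvalues are the most discriminative spatial filters.
vals, vecs = eigh(C1, C1 + C2)     # eigenvalues in ascending order
w_min, w_max = vecs[:, 0], vecs[:, -1]
```

Variance of class 1 projected through w_max is maximal relative to class 2 (and vice versa for w_min); it is this eigendecomposition step, driven by squared amplitudes, that the proposed method replaces with an L1-norm objective to blunt the influence of large-amplitude outliers.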
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values by 10% or more when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (about 5-10% fewer trends detected in comparison with the reference data).
Ribeiro, Mariana Antunes; dos Reis, Mariana Bisarro; de Moraes, Leonardo Nazário; Briton-Jones, Christine; Rainho, Cláudia Aparecida; Scarano, Wellerson Rodrigo
2014-11-01
Quantitative real-time RT-PCR (qPCR) has proven to be a valuable molecular technique to quantify gene expression. There are few studies in the literature that describe suitable reference genes to normalize gene expression data. Studies of transcriptionally disruptive toxins, like tetrachlorodibenzo-p-dioxin (TCDD), require careful consideration of reference genes. The present study was designed to validate potential reference genes in human Sertoli cells after exposure to TCDD. A total of 32 candidate reference genes were analyzed to determine their applicability. The geNorm and NormFinder software packages were used to estimate the expression stability of the 32 genes and to identify the most suitable genes for qPCR data normalization.
Evolution of a predator-induced, nonlinear reaction norm.
Carter, Mauricio J; Lind, Martin I; Dennis, Stuart R; Hentley, William; Beckerman, Andrew P
2017-08-30
Inducible, anti-predator traits are a classic example of phenotypic plasticity. Their evolutionary dynamics depend on their genetic basis, the historical pattern of predation risk that populations have experienced and current selection gradients. When populations experience predators with contrasting hunting strategies and size preferences, theory suggests contrasting micro-evolutionary responses to selection. Daphnia pulex is an ideal species to explore the micro-evolutionary response of anti-predator traits because they face heterogeneous predation regimes, sometimes experiencing only invertebrate midge predators and other times experiencing vertebrate fish and invertebrate midge predators. We explored plausible patterns of adaptive evolution of a predator-induced morphological reaction norm. We combined estimates of selection gradients that characterize the various habitats that D. pulex experiences with detail on the quantitative genetic architecture of inducible morphological defences. Our data reveal a fine scale description of daphnid defensive reaction norms, and a strong covariance between the sensitivity to cues and the maximum response to cues. By analysing the response of the reaction norm to plausible, predator-specific selection gradients, we show how in the context of this covariance, micro-evolution may be more uniform than predicted from size-selective predation theory. Our results show how covariance between the sensitivity to cues and the maximum response to cues for morphological defence can shape the evolutionary trajectory of predator-induced defences in D. pulex . © 2017 The Authors.
Schrauf, Robert W; Weintraub, Sandra; Navarro, Ellen
2006-05-01
Adapting the National Adult Reading Test (NART) for assessing premorbid intelligence in languages other than English requires (a) generating word items that are rare and do not follow the grapheme-to-phoneme mappings common in that language, and (b) subsequent validation against a cognitive battery normed on the population of interest. Such tests exist for Italy, France, Spain, and Argentina, all normed against national versions of the Wechsler Adult Intelligence Scale. Given the varieties of Spanish spoken in the United States, adapting the Spanish Word Accentuation Test (WAT) requires re-validating the original word list, plus possible new items, against a cognitive battery that has been normed on Spanish speakers from many countries. This study reports the generation of 55 additional words and revalidation in a sample of 80 older, Spanish-dominant immigrants. The Batería Woodcock-Muñoz Revisada (BWM-R), normed on Spanish speakers from six countries and five U.S. states, was used to establish criterion validity. The original WAT word list accounted for 77% of the variance in the BWM-R and 58% of the variance in Raven's Colored Progressive Matrices, suggesting that the unmodified list possesses adequate predictive validity as an indicator of intelligence. Regression equations are provided for estimating BWM-R and Raven's scores from WAT scores.
Rauhut, Heiko
2013-01-01
Field experiments have shown that observing other people littering, stealing or lying can trigger one's own misconduct, leading to a decay of social order. However, a large extent of norm violations goes undetected. Hence, the direction of the dynamics crucially depends on actors' beliefs regarding undetected transgressions. Because undetected transgressions are hardly measurable in the field, a laboratory experiment was developed in which the complete prevalence of norm violations, subjective beliefs about them, and their behavioral dynamics are measurable. In the experiment, subjects could lie about their monetary payoffs, estimate the extent of liars in their group, and make subsequent lies contingent on information about other people's lies. Results show that informed people who underestimate others' lying increase their own lying more than twofold, while those who overestimate it decrease their lying by more than half, compared with people without information about others' lies. This substantial interaction puts previous results into perspective, showing that information about others' transgressions can trigger dynamics in both directions: the spreading of normative decay and the restoring of norm adherence. PMID:24236007
Miedema, Stephanie S; Yount, Kathryn M; Chirwa, Esnat; Dunkle, Kristin; Fulu, Emma
2017-02-01
Men's perpetration of gender-based violence remains a global public health issue. Violence prevention experts call for engagement of boys and men to change social norms around masculinity in order to prevent gender-based violence. Yet, men do not comprise a homogenous category. Drawing on probability estimates of men who report same-sex practices and preferences captured in a multi-country gender-based violence prevention survey in the Asia-Pacific region, we test the effects of sexuality-related factors on men's adverse life experiences. We find that sexual minority men face statistically higher risk of lifetime adversity related to gender-based violence, stemming from gender inequitable norms in society. Sexuality is thus a key axis of differentiation among men in the Asia-Pacific region, influencing health and wellbeing and reflecting men's differential engagement with dominant norms of masculinity. Integrating awareness of male sexual diversity into gender-based violence prevention interventions, particularly those that work with boys and men, and bridging violence prevention programming between sexual minority communities and women, are essential to tackle the root drivers of violence.
Block, Stephanie D.; Poplin, Ashlee Burgess; Wang, Eric; Widaman, Keith F.; Runyan, Desmond K.
2016-01-01
Mandated child abuse reporters may judge specific disciplinary practices as unacceptable for young children, whereas child law professionals arbitrating allegations may be less inclusive. Do the views of these groups diverge, by child age, regarding physical discipline? Judgments of community norms across a wide range of children’s ages were obtained from 380 medical and legal professionals. Because the Parent-Child Conflict Tactics Scale (PC-CTS) can be used to assess the epidemiology of child disciplinary behaviors and as a proxy to examine the incidence or prevalence of child abuse, the disciplinary practices described on the PC-CTS were presented as triggers for questions. Significant child age effects were found for disciplinary practices classified as “harsh.” The consistencies between legal and medical professionals were striking. Both groups reflected changes in United States norms, as non-physical approaches were the most approved. We conclude that instruments estimating the prevalence of child maltreatment by parent-report should consider modifying how specific disciplinary practices are classified. PMID:27117603
OCT despeckling via weighted nuclear norm constrained non-local low-rank representation
NASA Astrophysics Data System (ADS)
Tang, Chang; Zheng, Xiao; Cao, Lijuan
2017-10-01
As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
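The core step behind weighted-nuclear-norm models of this kind can be sketched as weighted singular value thresholding on a grouped-patch matrix (a simplified stand-in for the authors' full guidance-image and patch-grouping pipeline; the weighting rule below is illustrative):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Soft-threshold each singular value of Y by its own weight.

    This is the closed-form proximal step of the weighted nuclear norm
    when the weights are non-descending in the singular-value index, as
    in WNNM-style denoising (large singular values are shrunk less).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

# Synthetic demo: a rank-2 "patch group" corrupted by additive noise
# (illustrative only; OCT speckle is multiplicative in practice).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 signal
Y = A + 0.1 * rng.standard_normal((30, 30))                      # noisy observation

s = np.linalg.svd(Y, compute_uv=False)
weights = 5.0 / (s + 0.1)   # heavier shrinkage of small (noise) singular values
X = weighted_svt(Y, weights)
```

The inverse-magnitude weights zero out the noise-dominated singular values while only lightly shrinking the signal subspace, so the denoised matrix is closer to the clean low-rank signal than the observation is.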
Blow-up of solutions to a quasilinear wave equation for high initial energy
NASA Astrophysics Data System (ADS)
Li, Fang; Liu, Fang
2018-05-01
This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain the lower bound estimate of the L2 norm of the solution. Furthermore, the concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of blow-up time is also obtained. This result extends and improves those of [1,2].
NASA Astrophysics Data System (ADS)
Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.
2017-07-01
This study evaluates the dew point method (Allen et al. 1998) to estimate atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas but also negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between estimated vapor pressure deficit (difference between saturated vapor pressure at minimum and average temperature) and measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed bias and allowed for a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation on the model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
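The dew point method and the Tetens-form saturation vapor pressure curve it relies on (as in FAO-56) can be sketched as follows; the study's improved bi-linear model and its fitted parameters are not reproduced here:

```python
import math

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure (kPa) at air temperature t_c in deg C (Tetens form, FAO-56)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def vapor_pressure_dew_point_method(t_min_c):
    """Dew point method of Allen et al. (1998): assume T_dew ~= T_min, so e_a = e_s(T_min)."""
    return sat_vapor_pressure(t_min_c)

# Example: T_min = 15 deg C gives an estimated vapor pressure of roughly 1.7 kPa.
e_a = vapor_pressure_dew_point_method(15.0)
```

The T_dew ~= T_min assumption holds best in humid climates where the air cools to saturation overnight, which is consistent with the biases the study reports in dry and coastal areas.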
Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography
Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.
2012-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (at the 95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the population parameters of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
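A synthetic illustration of the effect quantified above (the diurnal cycle below is invented, not SURFRAD data): whenever the diurnal temperature curve is asymmetric, the (Tmax + Tmin)/2 estimator is biased relative to the true mean of all 1440 one-minute readings.

```python
import numpy as np

# One day of synthetic 1-min temperatures: a 15 deg C mean plus an
# asymmetric diurnal cycle (the second harmonic narrows the warm hours).
theta = 2 * np.pi * np.arange(1440) / 1440
temp = 15.0 + 8.0 * np.sin(theta) + 2.0 * np.cos(2 * theta)

true_mean = temp.mean()                        # mean of all 1440 readings (15.0)
minmax_mean = 0.5 * (temp.max() + temp.min())  # the (Tmax + Tmin) / 2 estimator
bias = minmax_mean - true_mean                 # nonzero because the cycle is skewed
```

For this curve Tmax = 21 and Tmin = 5, so the min/max estimator gives 13 °C against a true mean of 15 °C, a -2 °C bias; with a purely sinusoidal (symmetric) cycle the two estimators would agree exactly.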
Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes
Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.
2013-01-01
Abstract: Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, climate models predict future temperature at 2 m above ground, leaving ground-surface microclimate poorly characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape, capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species' range shifts under climate change.
Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.
NASA Astrophysics Data System (ADS)
Žáček, K.
Summary: The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness, i.e., by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increased difference between the smoothed and original models. Similarly, the estimated travel-time error also increases with the difference between the models. In smoothing the Marmousi model, we found the estimated travel-time error to be on the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we found the Gaussian beams and Gaussian packets to be on the verge of applicability even in models sufficiently smoothed for ray tracing.
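The trade-off described above — a smoother model versus fidelity to the original — can be illustrated with a minimal 1-D sketch. The regularized form below (data misfit plus a penalty on second spatial differences of slowness) is an assumed, simplified stand-in for the paper's Sobolev-norm minimization; the profile and weight are invented.

```python
import numpy as np

# Minimal 1-D sketch: minimize ||m - m0||^2 + lam * ||D2 m||^2, where D2 is the
# second-difference operator, i.e. penalize second spatial derivatives of slowness.
n = 200
x = np.linspace(0, 1, n)
rng = np.random.default_rng(0)
# Rough "slowness" profile: blocky structure plus noise (illustrative units).
m0 = 0.5 + 0.1 * np.sign(np.sin(8 * np.pi * x)) + 0.02 * rng.standard_normal(n)

D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
lam = 1e2                               # smoothing weight (assumed)
m = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, m0)

# The smoothed model has a far smaller roughness penalty, at the cost of
# deviating from the original model -- the trade-off noted in the abstract.
print(np.linalg.norm(D2 @ m0), np.linalg.norm(D2 @ m), np.linalg.norm(m - m0))
```

Raising `lam` makes the model ever friendlier to ray tracing while pushing `||m - m0||` (and hence the travel-time error between models) up, which is exactly the tension the abstract describes.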
Empirical evidence for acceleration-dependent amplification factors
Borcherdt, R.D.
2002-01-01
Site-specific amplification factors, Fa and Fv, used in current U.S. building codes decrease with increasing base acceleration level, as implied by the Loma Prieta earthquake at 0.1 g and extrapolated using numerical models and laboratory results. The Northridge earthquake recordings of 17 January 1994 and subsequent geotechnical data permit empirical estimates of amplification at base acceleration levels up to 0.5 g. Distance measures and normalization procedures used to infer amplification ratios from soil-rock pairs in predetermined azimuth-distance bins significantly influence the dependence of amplification estimates on base acceleration. Factors inferred using a hypocentral distance norm do not show a statistically significant dependence on base acceleration. Factors inferred using norms implied by the attenuation functions of Abrahamson and Silva show a statistically significant decrease with increasing base acceleration. The decrease is statistically more significant for stiff clay and sandy soil (site class D) sites than for stiffer sites underlain by gravelly soils and soft rock (site class C). The decrease in amplification with increasing base acceleration is more pronounced for the short-period amplification factor, Fa, than for the midperiod factor, Fv.
The SME gauge sector with minimum length
NASA Astrophysics Data System (ADS)
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody
2018-04-01
To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n = 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995), a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.
Luck, Tobias; Pabst, Alexander; Rodriguez, Francisca S; Schroeter, Matthias L; Witte, Veronica; Hinz, Andreas; Mehnert, Anja; Engel, Christoph; Loeffler, Markus; Thiery, Joachim; Villringer, Arno; Riedel-Heller, Steffi G
2018-05-01
To provide new age-, sex-, and education-specific reference values for an extended version of the well-established Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Assessment Battery (CERAD-NAB) that additionally includes the Trail Making Test and the Verbal Fluency Test-S-Words. Norms were calculated based on the cognitive performances of n = 1,888 dementia-free participants (60-79 years) from the population-based German LIFE-Adult-Study. Multiple regressions were used to examine the association of the CERAD-NAB scores with age, sex, and education. In order to calculate the norms, quantile and censored quantile regression analyses were performed, estimating marginal means of the test scores at the 2.28, 6.68, 10, 15.87, 25, 50, 75, and 90 percentiles for age-, sex-, and education-specific subgroups. Multiple regression analyses revealed that younger age was significantly associated with better cognitive performance in 15 CERAD-NAB measures and higher education with better cognitive performance in all 17 measures. Women performed significantly better than men in 12 measures and men better than women in four measures. The determined norms indicate ceiling effects for the cognitive performances in the Boston Naming, Word List Recognition, Constructional Praxis Copying, and Constructional Praxis Recall tests. The new norms for the extended CERAD-NAB will be useful for evaluating dementia-free German-speaking adults in a broad variety of relevant cognitive domains. The extended CERAD-NAB follows more closely the criteria for the new DSM-5 Mild and Major Neurocognitive Disorder. Additionally, it could be further developed to include a test for social cognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
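The percentile-based norming step can be sketched in a few lines: within one age/sex/education stratum, the normative cut-offs are (in the simplest, unadjusted case) the empirical quantiles of the score distribution at the percentile levels the study lists. The scores below are synthetic, not CERAD-NAB data, and this ignores the censored-quantile-regression adjustment the study applies.

```python
import numpy as np

# Synthetic test scores for one demographic subgroup (illustrative only).
rng = np.random.default_rng(3)
scores = rng.normal(25, 4, size=500)

# Percentile levels used in the study for the normative tables.
percentiles = [2.28, 6.68, 10, 15.87, 25, 50, 75, 90]
norms = np.percentile(scores, percentiles)
for p, v in zip(percentiles, norms):
    print(f"P{p:>5}: {v:.1f}")
```

A clinician can then locate a patient's raw score against these cut-offs; e.g. a score below the P2.28 value lies more than two standard deviations below the subgroup mean under normality.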
Patrick, Megan E.; Lee, Christine M.
2012-01-01
Objective: Given the known risks of alcohol use and sexual behavior for college students on Spring Break, this study was designed to document the behaviors and correlates associated with being on a Spring Break trip on a given day (controlling for average time on a trip). Method: Participants were undergraduate students (n = 261; 55% women) who reported that they planned to go on a Spring Break trip. Web-based survey responses before and after Spring Break documented perceived norms, intentions, and actual behavior on each of the 10 days of Spring Break. Results: Students who went on longer trips, who previously engaged in more heavy episodic drinking, or who had greater pre–Spring Break intentions to drink reported greater alcohol use during Spring Break. Similarly, students with greater pre–Spring Break intentions to have sex, greater perceived norms for sex, or more previous sexual partners had greater odds of having sex. On days students were on trips, they had a greater likelihood of having sex, drinking to higher estimated blood alcohol concentrations, consuming more drinks, and reporting perceived drunkenness than on nontrip days, especially if they had intentions to have sex and drink alcohol (and, for models predicting sexual behavior and drunkenness, had greater perceived norms for sex and drinking). Conclusions: Students who went on Spring Break trips engaged in more risk behaviors. In addition, the context of being on a trip on a given day was associated with increased risk, especially if they had stronger intentions and, in some cases, higher perceived norms. Further research is needed to describe the contexts of Spring Break trips and how to intervene effectively. PMID:22630797
Lithology-dependent minimum horizontal stress and in-situ stress estimate
NASA Astrophysics Data System (ADS)
Zhang, Yushuai; Zhang, Jincai
2017-04-01
Based on the generalized Hooke's law coupling stresses and pore pressure, the minimum horizontal stress is solved under the assumption that the vertical, minimum, and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method gives the minimum value, or lower bound, of the minimum stress. Using Anderson's faulting theory and this lower bound of the minimum horizontal stress, the coefficient of friction of the fault is derived. It shows that the coefficient of friction may have a much smaller value than is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimation. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. It also shows that the coefficient of friction of the fault depends on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that such a fault is weaker and more likely to undergo shear failure than a fault composed of sandstones. To keep the weak fault from shear sliding, a higher minimum stress and a lower shear stress are required. That is, the critically stressed weak fault maintains a higher minimum stress, which explains why a low shear stress appears in the frictionally weak fault.
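The chain of reasoning above can be sketched numerically using the standard uniaxial-strain lower bound for the minimum horizontal stress and the effective-stress ratio from Anderson's theory for a critically stressed normal fault. The input values are illustrative, not taken from the paper, and this sketch uses the classical uniaxial-strain formula rather than the paper's full generalized-Hooke's-law derivation.

```python
import math

# Illustrative inputs (MPa), not from the paper.
sigma_v = 60.0   # vertical (overburden) stress
p_pore  = 25.0   # pore pressure
nu      = 0.30   # Poisson's ratio (shale-like)
alpha   = 1.0    # Biot coefficient

# Uniaxial-strain (lower-bound) minimum horizontal stress:
# sigma_h = nu/(1-nu) * (sigma_v - alpha*p) + alpha*p
sigma_h = nu / (1 - nu) * (sigma_v - alpha * p_pore) + alpha * p_pore

# For a critically stressed normal fault, Anderson's theory gives
# sigma_v'/sigma_h' = (sqrt(mu^2 + 1) + mu)^2; solve for mu.
q = (sigma_v - p_pore) / (sigma_h - p_pore)
mu = (q - 1) / (2 * math.sqrt(q))

print(f"Shmin lower bound = {sigma_h:.1f} MPa, implied friction mu = {mu:.2f}")
```

For these numbers the implied friction coefficient comes out near 0.44, below the conventionally assumed 0.6-0.7, which is the qualitative point the abstract makes about using the lower-bound minimum stress.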
Estimation of sex from the anthropometric ear measurements of a Sudanese population.
Ahmed, Altayeb Abdalla; Omer, Nosyba
2015-09-01
The external ear and its prints have multifaceted roles in medico-legal practice, e.g., identification and facial reconstruction. Furthermore, its norms are essential in the diagnosis of congenital anomalies and the design of hearing aids. Body part dimensions vary in different ethnic groups, so the most accurate statistical estimations of biological attributes are developed using population-specific standards. Sudan lacks comprehensive data about ear norms; moreover, assessments of the possibility of sex estimation from ear dimensions using robust statistical techniques are universally rare. Therefore, this study attempts to establish data for normal adult Sudanese Arabs, assessing the existence of asymmetry and developing a population-specific equation for sex estimation. The study sample comprised 200 healthy Sudanese Arab volunteers (100 males and 100 females) in the age range of 18-30 years. The physiognomic ear length and width, lobule length and width, and conchal length and width measurements were obtained by direct anthropometry, using a digital sliding caliper. Moreover, indices and asymmetry were assessed. Data were analyzed using basic descriptive statistics and discriminant function analyses employing jackknife validations of classification results. All linear dimensions used were sexually dimorphic except lobular lengths. Some of the variables and indices show asymmetry. Ear dimensions showed cross-validated sex classification accuracy ranging between 60.5% and 72%. Hence, the ear measurements cannot be used as an effective tool in the estimation of sex. However, in the absence of other more reliable means, they can still be considered a supportive trait in sex estimation. Further, asymmetry should be considered in identification from the ear measurements. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Atif, Muhammad; Sulaiman, Syed Azhar Syed; Shafie, Asrul Akmal; Asif, Muhammad; Ahmad, Nafees
2013-10-01
The aim of the study was to obtain norms of the SF-36v2 health survey and the association of summary component scores with socio-demographic variables in healthy households of tuberculosis (TB) patients. All household members (18 years and above; healthy; literate) of registered tuberculosis patients who came for contact tracing during March 2010 to February 2011 at the respiratory clinic of Penang General Hospital were invited to complete the SF-36v2 health survey using the official translation of the questionnaire in Malay, Mandarin, Tamil and English. Scoring of the questionnaire was done using Quality Metric's QM Certified Scoring Software version 4. Multivariate analysis was conducted to uncover the predictors of physical and mental health. A total of 649 eligible respondents were approached, while 525 agreed to participate in the study (response rate = 80.1 %). Of the consenting respondents, 46.5 % were male and only 5.3 % were over 75 years. Internal consistencies met the minimum criteria (α > 0.7). Correlations between the scales were always less than their respective reliability coefficients. Mean physical component summary scale scores were equivalent to United States general population norms. However, there was a difference of more than three norm-based scoring points for mean mental component summary scores, indicating poor mental health. A notable proportion of the respondents were at risk of depression. Age 75 years and above (p = 0.001; OR 32.847), widowhood (p = 0.013; OR 2.599) and postgraduate education (p < 0.001; OR 7.865) were predictors of poor physical health, while unemployment (p = 0.033; OR 1.721) was the only predictor of poor mental health. The SF-36v2 is a valid instrument to assess HRQoL among the households of TB patients. Study findings indicate the existence of poor mental health and risk of depression among family caregivers of TB patients.
We therefore recommend that caregivers of TB patients be offered intensive support and special attention to help them cope with these emotional problems.
2014-01-01
Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
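The paper's combination rule — three pairwise classifiers (NORM/LBBB, NORM/RBBB, LBBB/RBBB) each casting a label, with the final beat type chosen by majority vote — can be sketched as follows. The classifier internals are stubbed with hypothetical labels, since the feature extraction and trained models are not given in the abstract.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among the pairwise classifiers' outputs.

    With three voters a strict majority usually exists; on a three-way tie
    this falls back to the first label cast (Counter preserves first-seen
    order among equal counts).
    """
    top_label, _count = Counter(labels).most_common(1)[0]
    return top_label

# Hypothetical votes for one test heartbeat from the three pairwise classifiers:
# NORM-vs-LBBB says LBBB, NORM-vs-RBBB says RBBB, LBBB-vs-RBBB says LBBB.
votes = ["LBBB", "RBBB", "LBBB"]
print(majority_vote(votes))  # LBBB wins 2-1
```

The design choice worth noting is that each pairwise classifier only ever emits one of its two classes, so the vote aggregates three specialized decisions rather than asking one model to separate all three classes at once.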
Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu
2014-06-05
NASA Technical Reports Server (NTRS)
1973-01-01
Analyses and design studies were conducted on the technical and economic feasibility of installing the JT8D-109 refan engine on the DC-9 aircraft. Design criteria included minimum change to the airframe to achieve desired acoustic levels. Several acoustic configurations were studied, with two selected for detailed investigation. The minimum selected acoustic treatment configuration results in an estimated aircraft weight increase of 608 kg (1,342 lb), and the maximum selected acoustic treatment configuration results in an estimated aircraft weight increase of 809 kg (1,784 lb). The range loss for the minimum and maximum selected acoustic treatment configurations, based on long range cruise at 10 668 m (35,000 ft) altitude with a typical payload of 6 804 kg (15,000 lb), amounts to 54 km (86 n. mi.). Estimated reductions in EPNL for the minimum selected treatment show 8 EPNdB at approach, 12 EPNdB for takeoff with power cutback, 15 EPNdB for takeoff without power cutback, and 12 EPNdB for sideline using FAR Part 36. Little difference in EPNL was estimated between minimum and maximum treatments due to reduced performance of the maximum treatment. No major technical problems were encountered in the study. The refan concept for the DC-9 appears technically feasible and economically viable at approximately $1,000,000 per airplane. An additional study of the installation of the JT3D-9 refan engine on the DC-8-50/61 and DC-8-62/63 aircraft is included. Three levels of acoustic treatment were suggested for the DC-8-50/61 and two levels for the DC-8-62/63. Results indicate the DC-8 can technically be retrofitted with refan engines for approximately $2,500,000 per airplane.
Binoculars with mil scale as a training aid for estimating form class
H.W. Camp, J.R.; C.A. Bickford
1949-01-01
In an extensive forest inventory, estimates involving personal judgment cannot be eliminated. However, every means should be taken to keep these estimates to a minimum and to provide on-the-job training that is adequate for obtaining the best estimates possible.
The Highly Adaptive Lasso Estimator
Benkeser, David; van der Laan, Mark
2017-01-01
Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, neither relies on local smoothness assumptions nor is constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits whose variation norm is bounded by a constant. Using empirical process theory, we establish a fast rate of convergence for our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data-generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111
NASA Technical Reports Server (NTRS)
Emmons, T. E.
1976-01-01
Results are presented from an investigation of the factors that affect the determination of the Spacelab (S/L) minimum interface main dc voltage and the power available from the orbiter. The dedicated fuel cell mode of powering the S/L is examined, along with the minimum S/L interface voltage and available power, using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
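The on-line identifier described above can be illustrated with a standard recursive least-squares (RLS) sketch, which is the familiar minimum-variance parameter estimator for a linear regression model with additive noise. This simplified version omits the multiplicative-noise handling that distinguishes the paper's filter; the system and noise levels are invented.

```python
import numpy as np

# Identify theta in y_k = phi_k^T theta + v_k on-line via recursive least squares.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.3])   # unknown parameters (illustrative)
theta = np.zeros(2)                  # running parameter estimate
P = 1e3 * np.eye(2)                  # covariance of the estimation error

for _ in range(500):
    phi = rng.standard_normal(2)                      # regressor vector
    y = phi @ theta_true + 0.05 * rng.standard_normal()  # noisy measurement
    # RLS update: gain from the error covariance, then estimate and covariance.
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi) @ P

print(theta)  # converges toward theta_true as data accumulate
```

The propagated matrix `P` is what the abstract calls the covariance of the identification error; its shrinkage over iterations is what underlies the mean-square convergence and consistency claims.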
Benefits and risks of adopting the global code of practice for recreational fisheries
Arlinghaus, Robert; Beard, T. Douglas; Cooke, Steven J.; Cowx, Ian G.
2012-01-01
Recreational fishing constitutes the dominant or sole use of many fish stocks, particularly in freshwater ecosystems in Western industrialized countries. However, despite their social and economic importance, recreational fisheries are generally guided by local or regional norms and standards, with few comprehensive policy and development frameworks existing across jurisdictions. We argue that adoption of a recently developed Global Code of Practice (CoP) for Recreational Fisheries can provide benefits for moving recreational fisheries toward sustainability on a global scale. The CoP is a voluntary document, specifically framed toward recreational fisheries practices and issues, thereby complementing and extending the Code of Conduct for Responsible Fisheries of the United Nations Food and Agriculture Organization. The CoP for Recreational Fisheries describes the minimum standards of environmentally friendly, ethically appropriate, and—depending on local situations—socially acceptable recreational fishing and its management. Although many, if not all, of the provisions presented in the CoP are already addressed through national fisheries legislation and state-based fisheries management regulations in North America, adopting a common framework for best practices in recreational fisheries across multiple jurisdictions would further promote their long-term viability in the face of interjurisdictional angler movements and some expanding threats to the activity related to shifting sociopolitical norms.
A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.
Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin
2018-07-01
Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm, and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and three slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
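The minimum-current-norm idea has a compact linear-algebra core: with more array elements (9) than field constraints, the constraint system is underdetermined, and the Moore-Penrose pseudoinverse picks the current vector of smallest norm that satisfies the field targets. The Gaussian per-element field profiles and all dimensions below are assumptions for illustration, not the coil model used in the paper.

```python
import numpy as np

# Assumed per-unit-current field profile of each of 9 array elements along z.
n_coils = 9
z = np.linspace(-0.1, 0.1, 200)                       # positions in m
centers = np.linspace(-0.08, 0.08, n_coils)
profiles = np.exp(-((z[:, None] - centers[None, :]) / 0.03) ** 2)  # 200 x 9

# Constraints: desired field values at three slice locations (arbitrary units).
slice_idx = [40, 100, 160]
A = profiles[slice_idx, :]                            # 3 x 9 constraint matrix
target = np.array([1.0, 0.0, -1.0])

# Underdetermined system A @ I = target: pinv returns the minimum-norm currents.
currents = np.linalg.pinv(A) @ target
print(currents, A @ currents)                         # A @ currents matches target
```

Among the infinitely many current vectors meeting the three constraints, the pseudoinverse solution minimizes the L2 norm of the currents, which is the quantity tied to amplifier demand and dissipation in the abstract's performance measures.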
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. 
From a numerical performance standpoint, the EM-PCA is computationally less efficient than POD for intact data since it suffers from the slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other, because of the computational cost of coefficient evaluation, which follows from the norm selection. For instance, gappy POD demands computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational costs of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots, and the EM-PCA for an incomplete data set involving many data-missing snapshots. Lastly, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit good agreement with those directly obtained by NPSS. 
Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data spread across the entire data set. (Abstract shortened by UMI.)
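The least-squares step that distinguishes gappy POD from the EM-PCA, as described in the abstract above, can be sketched in a few lines: a gappy snapshot is reconstructed by fitting POD coefficients against only the observed entries. This is an illustrative sketch; the names (`U`, `y`, `mask`) and the toy data are assumptions, not taken from the thesis.

```python
import numpy as np

def gappy_pod_fill(U, y, mask):
    """Least-squares fit of POD coefficients using only observed entries."""
    Um = U[mask]                          # rows of the basis at observed points
    c, *_ = np.linalg.lstsq(Um, y[mask], rcond=None)
    y_hat = y.copy()
    y_hat[~mask] = (U @ c)[~mask]         # fill only the gaps
    return y_hat

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 3)))   # orthonormal "POD" basis
y = U @ np.array([2.0, -1.0, 0.5])                  # snapshot lying in span(U)
mask = np.ones(50, dtype=bool)
mask[::7] = False                                   # knock out some entries
restored = gappy_pod_fill(U, y, mask)
print(np.allclose(restored, y))                     # exact when y lies in span(U)
```

Because the toy snapshot lies exactly in the span of the basis, the fit recovers the missing entries exactly; with real data the reconstruction is only as good as the basis.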
The Einstein-Hilbert gravitation with minimum length
NASA Astrophysics Data System (ADS)
Louzada, H. L. C.
2018-05-01
We study Einstein-Hilbert gravitation with the deformed Heisenberg algebra that leads to a minimum length, with the aim of finding and estimating the corrections in this theory and clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 that is causal, unitary, and provides a massive graviton. To that end, we calculate and analyze the dispersion relations of the theory under consideration.
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-model regression to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter in a weighted average whose scalar weights are inversely proportional to their variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
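The univariate composite estimate described above admits a one-line sketch: two estimates are averaged with weights inversely proportional to their variances, which also yields the variance of the combined estimate. This is a generic illustration of the inverse-variance rule, not code from the symposium paper.

```python
def composite(x1, v1, x2, v2):
    """Inverse-variance weighted average of two estimates (x1, x2)
    with variances (v1, v2); returns (estimate, its variance)."""
    w = v2 / (v1 + v2)                 # weight on x1, inversely proportional to v1
    est = w * x1 + (1 - w) * x2
    var = (v1 * v2) / (v1 + v2)        # always <= min(v1, v2)
    return est, var

# Equal variances reduce to a simple average, with the variance halved.
est, var = composite(10.0, 4.0, 14.0, 4.0)
print(est, var)  # 12.0 2.0
```

Note the combined variance is never larger than the smaller input variance, which is the sense in which the composite estimator is minimum variance.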
Spectral factorization of wavefields and wave operators
NASA Astrophysics Data System (ADS)
Rickett, James Edward
Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multi-dimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multi-dimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can be the result of either the surface-recording geometry, or focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can be easily both factored and inverted to compensate for uneven subsurface illumination in migrated images. 
Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
Pre- and postprocessing techniques for determining goodness of computational meshes
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Westermann, T.; Bass, J. M.
1993-01-01
Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.
Effect of holding office on the behavior of politicians
Enemark, Daniel; Gibson, Clark C.; McCubbins, Mathew D.; Seim, Brigitte
2016-01-01
Reciprocity is central to our understanding of politics. Most political exchanges—whether they involve legislative vote trading, interbranch bargaining, constituent service, or even the corrupt exchange of public resources for private wealth—require reciprocity. But how does reciprocity arise? Do government officials learn reciprocity while holding office, or do recruitment and selection practices favor those who already adhere to a norm of reciprocity? We recruit Zambian politicians who narrowly won or lost a previous election to play behavioral games that provide a measure of reciprocity. This combination of regression discontinuity and experimental designs allows us to estimate the effect of holding office on behavior. We find that holding office increases adherence to the norm of reciprocity. This study identifies causal effects of holding office on politicians’ behavior. PMID:27856736
Effect of holding office on the behavior of politicians.
Enemark, Daniel; Gibson, Clark C; McCubbins, Mathew D; Seim, Brigitte
2016-11-29
Reciprocity is central to our understanding of politics. Most political exchanges-whether they involve legislative vote trading, interbranch bargaining, constituent service, or even the corrupt exchange of public resources for private wealth-require reciprocity. But how does reciprocity arise? Do government officials learn reciprocity while holding office, or do recruitment and selection practices favor those who already adhere to a norm of reciprocity? We recruit Zambian politicians who narrowly won or lost a previous election to play behavioral games that provide a measure of reciprocity. This combination of regression discontinuity and experimental designs allows us to estimate the effect of holding office on behavior. We find that holding office increases adherence to the norm of reciprocity. This study identifies causal effects of holding office on politicians' behavior.
The research subject as wage earner.
Anderson, James A; Weijer, Charles
2002-01-01
The practice of paying research subjects for participating in clinical trials has yet to receive an adequate moral analysis. Dickert and Grady argue for a wage payment model in which research subjects are paid an hourly wage based on that of unskilled laborers. If we accept this approach, what follows? Norms for just working conditions emerge from workplace legislation and political theory. All workers, including paid research subjects under Dickert and Grady's analysis, have a right to at least minimum wage, a standard work week, extra pay for overtime hours, a safe workplace, no fault compensation for work-related injury, and union organization. If we accept that paid research subjects are wage earners like any other, then the implications for changes to current practice are substantial.
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...
2017-08-08
Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
Estimating the Effects of Students' Social Networks: Does Attending a Norm-Enforcing School Pay Off?
ERIC Educational Resources Information Center
Carolan, Brian V.
2010-01-01
In an attempt to forge tighter social relations, small school reformers advocate school designs intended to create smaller, more trusting, and more collaborative settings. These efforts to enhance students' social capital in the form of social closure are ultimately tied to improving academic outcomes. Using data derived from ELS: 2002, this study…
Computation of Effect Size for Moderating Effects of Categorical Variables in Multiple Regression
ERIC Educational Resources Information Center
Aguinis, Herman; Pierce, Charles A.
2006-01-01
The computation and reporting of effect size estimates is becoming the norm in many journals in psychology and related disciplines. Despite the increased importance of effect sizes, researchers may not report them or may report inaccurate values because of a lack of appropriate computational tools. For instance, Pierce, Block, and Aguinis (2004)…
ERIC Educational Resources Information Center
Ziomek, Robert L.; Wright, Benjamin D.
Techniques such as the norm-referenced and average score techniques, commonly used in the identification of educationally disadvantaged students, are critiqued. This study applied latent trait theory, specifically the Rasch Model, along with teacher judgments relative to the mastery of instructional/test decisions, to derive a standard setting…
A Handful of Paragraphs on "Translation" and "Norms."
ERIC Educational Resources Information Center
Toury, Gideon
1998-01-01
Presents some thoughts on the issue of translation and norms, focusing on the relationships between social agreements, conventions, and norms; translational norms; acts of translation and translation events; norms and values; norms for translated texts versus norms for non-translated texts; and competing norms. Comments on the reactions to three…
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely-used technique for pattern classification that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators, and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
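As a rough illustration of the minimum-variance design discussed above, the sketch below computes global minimum-variance weights from a covariance matrix shrunk toward a scaled identity. It is not the paper's hybrid Tyler/Ledoit-Wolf estimator: the shrinkage intensity `rho` is fixed here, whereas the paper optimizes it online, and the function name and data are made up.

```python
import numpy as np

def gmv_weights(returns, rho=0.3):
    """Global minimum-variance weights w = S^{-1}1 / (1' S^{-1} 1),
    using a sample covariance shrunk toward a scaled identity."""
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    S_shrunk = (1 - rho) * S + rho * (np.trace(S) / p) * np.eye(p)
    ones = np.ones(p)
    w = np.linalg.solve(S_shrunk, ones)
    return w / w.sum()                  # normalize so the weights sum to 1

rng = np.random.default_rng(1)
w = gmv_weights(rng.standard_normal((200, 5)))   # 200 return samples, 5 assets
print(round(w.sum(), 10))                        # 1.0 by construction
```

The shrinkage term keeps the solve well conditioned when the number of samples is close to the number of assets, which is the regime the paper targets.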
The minimum follow-up required for radial head arthroplasty: a meta-analysis.
Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P
2017-12-01
The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases were searched between January 2000 and March 2017. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation, and the reason for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were Group I and included 80 implants, and 37 were Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years; painful loosening was the main indication for re-operation. In Group II, 33 articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Fast calculation of the `ILC norm' in iterative learning control
NASA Astrophysics Data System (ADS)
Rice, Justin K.; van Wingerden, Jan-Willem
2013-06-01
In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used along with a modified Lanczos method to achieve very fast computational bounds on the learning convergence, by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time for an example from the literature by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness, which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix, instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.
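For intuition, the 'ILC norm' of the abstract above is the largest singular value of a lifted convolution (lower-triangular Toeplitz) matrix. The hedged sketch below estimates it with plain power iteration rather than the Lanczos or SSS machinery of the paper; the impulse response and matrix size are made up for illustration.

```python
import numpy as np

def ilc_norm_estimate(impulse, n=64, iters=50):
    """Estimate the largest singular value of the n-by-n lifted
    (lower-triangular Toeplitz) convolution matrix built from `impulse`."""
    P = np.zeros((n, n))
    for lag, h in enumerate(impulse):
        P += h * np.eye(n, k=-lag)     # place h on the lag-th lower diagonal
    v = np.ones(n) / np.sqrt(n)        # positive start vector
    for _ in range(iters):             # power iteration on P^T P
        w = P.T @ (P @ v)
        v = w / np.linalg.norm(w)
    return np.linalg.norm(P @ v)       # ||P v|| -> sigma_max for unit v

est = ilc_norm_estimate([1.0, 0.5, 0.25])
print(est <= 1.75)  # sigma_max is bounded by the 1-norm of the impulse response
```

The 1-norm bound (here 1.75) is the kind of explicit upper bound the abstract mentions; the power iteration converges to it from below as the matrix size grows.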
Benton, Stephen L; Downey, Ronald G; Glider, Peggy J; Benton, Sherry A
2008-11-01
This study examined whether college students' descriptive norm perceptions of protective behavioral drinking strategies explain variance in use of such strategies, controlling for covariates of students' gender, typical number of drinks, and negative drinking consequences. Derivation (n = 7,960; 55.2% women) and replication (n = 8,534; 54.5% women) samples of undergraduate students completed the Campus Alcohol Survey in classroom settings. Students estimated how frequently other students used each of nine protective behavioral strategies (PBS) and how frequently they themselves used each strategy. All items assessing norm perception of PBS (NPPBS) had pattern matrix coefficients exceeding .50 on a single factor, and all contributed to the overall scale reliability (Cronbach's alpha = .81). Hierarchical regression analyses indicated NPPBS explained significant variance in PBS, controlling for covariates, and explained an additional 7% of variance (p < .001). A Gender x Scale (PBS, NPPBS) repeated-measures analysis of variance revealed students believed peers used PBS less frequently than they themselves did (partial η² = .091, p < .001). Such social distancing was greater in women (ω² = .151, p < .001) than in men (ω² = .001, p < .001). Consistent with the principle of false uniqueness, whereby individuals regard their own positive characteristics as rare, college students, especially women, underestimate how frequently other students use PBS. Such norm misperception may enhance students' feelings of competence and self-esteem. The positive relationship between NPPBS and PBS indicates students with high NPPBS are more likely to use the strategies themselves.
Bignardi, Annaiza Braga; El Faro, Lenira; Pereira, Rodrigo Junqueira; Ayres, Denise Rocha; Machado, Paulo Fernando; de Albuquerque, Lucia Galvão; Santana, Mário Luiz
2015-10-01
Reaction norm models have been widely used to study genotype by environment interaction (G × E) in animal breeding. The objective of this study was to describe environmental sensitivity across first lactation in Brazilian Holstein cows using a reaction norm approach. A total of 50,168 individual monthly test day (TD) milk yields (10 test days) from 7476 complete first lactations of Holstein cattle were analyzed. The statistical models for all traits (10 TDs and 305-day milk yield) included the fixed effects of contemporary group, age of cow (linear and quadratic effects), and days in milk (linear effect), except for 305-day milk yield. A hierarchical reaction norm model (HRNM) based on an unknown covariate was used. The present study showed the presence of G × E in milk yield across the first lactation of Holstein cows. The variation in the heritability estimates implies differences in the response to selection depending on the environment in which the animals of this population are evaluated. In the average environment, the heritabilities for all traits were rather similar, ranging from 0.02 to 0.63. The scaling effect of G × E predominated throughout most of lactation. Particularly during the first 2 months of lactation, G × E caused reranking of breeding values. It is therefore important to include the environmental sensitivity of animals according to the phase of lactation in the genetic evaluations of Holstein cattle in tropical environments.
A norm knockout method on indirect reciprocity to reveal indispensable norms
Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya
2017-01-01
Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out. PMID:28276485
A norm knockout method on indirect reciprocity to reveal indispensable norms
NASA Astrophysics Data System (ADS)
Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya
2017-03-01
Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out.
Vector autoregressive models: A Gini approach
NASA Astrophysics Data System (ADS)
Mussard, Stéphane; Ndiaye, Oumar Hamady
2018-02-01
In this paper, it is proven that the usual VAR models may be performed in the Gini sense, that is, on an ℓ1 metric space. The Gini regression is robust to outliers. As a consequence, when data are contaminated by extreme values, we show that semi-parametric VAR-Gini regressions may be used to obtain robust estimators. Inference about the estimators is made with the ℓ1 norm. Also, impulse response functions and Gini decompositions for forecast errors are introduced. Finally, Granger causality tests are properly derived based on U-statistics.
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina
2010-01-01
Surface air temperature is a critical variable for describing the energy and water cycle of the Earth-atmosphere system and is a key input for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study to examine statistical relationships between ground meteorological station measurements of daily maximum/minimum surface air temperature and satellite remotely sensed land surface temperature from MODIS over the dry and semiarid regions of northern China. Studies were conducted for both MODIS-Terra and MODIS-Aqua using data from 2009. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationship between maximum air temperature and daytime land surface temperature depends significantly on land surface type and vegetation index, whereas the relationship between minimum air temperature and nighttime land surface temperature shows little dependence on surface conditions. Based on the linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1 km MODIS land surface temperature under clear-sky conditions. The statistical errors (sigma) of the estimated daily maximum (minimum) air temperature are about 3.8°C (3.7°C).
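The regression step described above reduces, in the simplest case, to an ordinary least-squares line relating station air temperature to MODIS land surface temperature. The sketch below uses synthetic numbers purely to illustrate the fit; the study's coefficients are estimated per surface type and vegetation index from real station/MODIS pairs.

```python
import numpy as np

# Synthetic station vs. satellite pairs (°C); real data would come from
# ground stations and MODIS LST retrievals over matching pixels.
lst  = np.array([10., 15., 20., 25., 30., 35.])   # land surface temperature
tair = np.array([ 9., 13., 17., 21., 25., 29.])   # daily max air temperature

a, b = np.polyfit(lst, tair, 1)        # slope and intercept of T_air = a*LST + b
resid = tair - (a * lst + b)
sigma = resid.std(ddof=2)              # residual error of the fit

print(round(a, 2), round(b, 2))        # here the toy data lie exactly on 0.8*LST + 1
```

With real observations `sigma` plays the role of the ~3.8°C error quoted in the abstract; here it is essentially zero because the toy points are exactly collinear.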
Andersen, Lau M.
2018-01-01
An important aim of an analysis pipeline for magnetoencephalographic data is that it allows the researcher to spend maximal effort on making the statistical comparisons that will answer his or her questions, while spending minimal effort on the intricacies and machinery of the pipeline. I here present a set of functions and scripts that allow for setting up a clear, reproducible structure for separating raw and processed data into folders and files, such that minimal effort can be spent on: (1) double-checking that the right input goes into the right functions; (2) making sure that output and intermediate steps can be accessed meaningfully; (3) applying operations efficiently across groups of subjects; (4) re-processing data if changes to any intermediate step are desirable. Applying the scripts requires only general knowledge of the Python language. The data analyzed are neural responses to tactile stimulations of the right index finger in a group of 20 healthy participants, acquired from an Elekta Neuromag System. Two analyses are presented: going from individual sensor space representations to, respectively, an across-group sensor space representation and an across-group source space representation. The processing steps covered for the first analysis are filtering the raw data, finding events of interest in the data, epoching data, finding and removing independent components related to eye blinks and heartbeats, calculating participants' individual evoked responses by averaging over epoched data, and calculating a grand average sensor space representation over participants. 
The second analysis starts from the participants' individual evoked responses and covers: estimating noise covariance, creating a forward model, creating an inverse operator, estimating distributed source activity on the cortical surface using a minimum norm procedure, morphing those estimates onto a common cortical template and calculating the patterns of activity that are statistically different from baseline. To estimate source activity, processing of the anatomy of subjects based on magnetic resonance imaging is necessary. The necessary steps are covered here: importing magnetic resonance images, segmenting the brain, estimating boundaries between different tissue layers, making fine-resolution scalp surfaces for facilitating co-registration, creating source spaces and creating volume conductors for each subject. PMID:29403349
Low Streamflow Forecasting Using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting a single species versus coexistence of two or more species relate to hypotheses about the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than that for supporting two or more species of colubrids (6.7 km²). The great difference between rattlesnakes and colubrids in the minimum area required to support more than one species implies that, for islands in the Gulf of California, relative extinction risks are higher for coexistence of multiple rattlesnake species, and that competition within and between species of rattlesnakes is likely much more intense than within and between species of colubrids.
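Under a logistic occupancy model in log10(island area), a minimum area threshold can be read off as the area at which the predicted occupancy probability reaches a cutoff such as 0.5; a sketch with made-up coefficients (not the fitted values from this study):

```python
import math

def occupancy_probability(log_area, intercept, slope):
    """Logistic probability that an island of a given log10(area) is occupied."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * log_area)))

def minimum_area_threshold(intercept, slope, p=0.5):
    """Area (km^2) at which occupancy probability reaches p.

    Solves intercept + slope * log10(area) = logit(p) for area.
    """
    logit_p = math.log(p / (1.0 - p))
    return 10 ** ((logit_p - intercept) / slope)
```

An ordinal model extends this with one threshold per category (one species, two or more species), which is how separate single-species and coexistence thresholds arise.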
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
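The difference-in-differences design described above compares the change in birth rates in states that raised the minimum wage with the change in states that did not; a minimal sketch (toy numbers, not the study's data, and omitting the fixed effects and covariates of the full model):

```python
from statistics import mean

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences on group mean outcomes (e.g. birth rates).

    The causal interpretation rests on the parallel-trends assumption:
    absent treatment, both groups would have changed by the same amount.
    """
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change
```

In the paper this comparison is implemented as a regression with state and quarter-year fixed effects rather than as raw group means.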
Regularity estimates up to the boundary for elliptic systems of difference equations
NASA Technical Reports Server (NTRS)
Strikwerda, J. C.; Wade, B. A.; Bube, K. P.
1986-01-01
Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.
Tritium as an indicator of ground-water age in Central Wisconsin
Bradbury, Kenneth R.
1991-01-01
In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.
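Given a reconstructed tritium input and a measured sample concentration, a minimum age follows from the radioactive decay law alone; this sketch assumes pure decay with a half-life of about 12.3 years and ignores dispersion and mixing:

```python
import math

TRITIUM_HALF_LIFE_YEARS = 12.32  # approximate tritium half-life

def decay_age(input_tu, measured_tu, half_life=TRITIUM_HALF_LIFE_YEARS):
    """Years required for tritium to decay from input_tu to measured_tu (TU).

    Comparing a sample against the input history of its recharge year in
    this way yields a minimum ground-water age estimate.
    """
    return half_life * math.log2(input_tu / measured_tu)
```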
Vyas, Seema; Heise, Lori
2016-11-01
To explore how area-level socioeconomic status and gender-related norms influence partner violence against women in Tanzania, we analysed data from the 2010 Tanzania Demographic and Health Survey and used multilevel logistic regression to estimate individual- and community-level effects on women's risk of current partner violence. Prevalence of current partner violence was 36.1%; however, prevalence varies across communities. Twenty-nine percent of the variation in the log-odds of partner violence is due to community-level influences. When adjusting for individual-level characteristics, this variation falls to 10%, and it falls further to 8% when adjusting for additional community-level factors. Higher levels of women's acceptance of wife beating, male unemployment, and years of schooling among men were associated with higher risk of partner violence, whereas higher levels of women in paid work were associated with lower risk. Area-level poverty and inequitable gender norms were associated with higher risk of partner violence. Empowerment strategies, along with efforts to address social attitudes, are likely to achieve reductions in rates of partner violence against women in Tanzania and in other similar low-income country settings.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can deliver competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. The data sparsity method can therefore serve as a competitive alternative to the squared norm penalty. Some theoretical properties of the proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of the data sparsity constraint. PMID:27134575
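Two ingredients of this approach, the check (pinball) loss that defines quantile regression and a thresholding operator that sparsifies kernel coefficients, can be sketched as follows (soft thresholding is used here for illustration; the paper's exact data sparsity constraint may differ):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Check (pinball) loss; its minimizer is the tau-th conditional quantile."""
    r = np.asarray(y, dtype=float) - np.asarray(pred, dtype=float)
    return float(np.mean(np.where(r >= 0, tau * r, (tau - 1) * r)))

def threshold_coefficients(alpha, t):
    """Soft-threshold kernel expansion coefficients toward zero.

    Coefficients driven exactly to zero drop their training points from
    the kernel representation, giving data sparsity.
    """
    alpha = np.asarray(alpha, dtype=float)
    return np.sign(alpha) * np.maximum(np.abs(alpha) - t, 0.0)
```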
NASA Astrophysics Data System (ADS)
Caridi, F.; Marguccio, S.; Durante, G.; Trozzo, R.; Fullone, F.; Belvedere, A.; D'Agostino, M.; Belmusto, G.
2017-01-01
In this article, natural radioactivity measurements and dosimetric evaluations are reported for soil samples contaminated by Naturally Occurring Radioactive Materials (NORM), in order to assess any possible radiological hazard for the population and for workers professionally exposed to ionizing radiation. The investigated samples came from the district of Crotone, Calabria region, southern Italy. The natural radioactivity investigation was performed by high-resolution gamma-ray spectrometry. From the measured gamma spectra, activity concentrations were determined for 226Ra, 234mPa, 224Ra, 228Ac, and 40K and compared with their clearance levels for NORM. The total effective dose was calculated for each sample as the sum of the committed effective dose from inhalation and the effective dose from external irradiation. The sum of the total effective doses estimated for all investigated samples was compared to the action levels provided by Italian legislation (D.Lgs. 230/95 and subsequent modifications) for members of the population (0.3 mSv/y) and for professionally exposed workers (1 mSv/y). It was found to be less than the limit of no radiological significance (10 μSv/y).
The role of cities in reducing smoking in China.
Redmon, Pamela; Koplan, Jeffrey; Eriksen, Michael; Li, Shuyang; Kean, Wang
2014-09-26
China is the epicenter of the global tobacco epidemic. China grows more tobacco, produces more cigarettes, makes more profit from tobacco, and has more smokers than any other nation in the world. Approximately one million smokers in China die annually from diseases caused by smoking, and this figure is expected to exceed two million by 2020. Chinese cities have a unique opportunity and role to play in leading the tobacco control charge from the "bottom up". The Emory Global Health Institute-China Tobacco Control Partnership supported 17 cities in establishing tobacco control programs aimed at changing social norms around tobacco use. Program assessments showed that the Tobacco Free Cities grantees' progress in establishing tobacco control policies and raising public awareness through policies, programs, and education activities varied from modest to substantial. Lessons learned included the need for training and tailored technical support to build staff capacity, and the importance of government and organizational support for tobacco control. Tobacco control, particularly in China, is complex, but the potential for significant public health impact is unparalleled. Cities have a critical role to play in changing social norms of tobacco use, and may be the driving force for such change in China.
Mollborn, Stefanie; Domingue, Benjamin W; Boardman, Jason D
2014-06-01
Researchers seeking to understand teen sexual behaviors often turn to age norms, but they are difficult to measure quantitatively. Previous work has usually inferred norms from behavioral patterns or measured group-level norms at the individual level, ignoring multiple reference groups. Capitalizing on the multilevel design of the Add Health survey, we measure teen pregnancy norms perceived by teenagers, as well as average norms at the school and peer network levels. School norms predict boys' perceived norms, while peer network norms predict girls' perceived norms. Peer network and individually perceived norms against teen pregnancy independently and negatively predict teens' likelihood of sexual intercourse. Perceived norms against pregnancy predict increased likelihood of contraception among sexually experienced girls, but sexually experienced boys' contraceptive behavior is more complicated: When both the boy and his peers or school have stronger norms against teen pregnancy he is more likely to contracept, and in the absence of school or peer norms against pregnancy, boys who are embarrassed are less likely to contracept. We conclude that: (1) patterns of behavior cannot adequately operationalize teen pregnancy norms, (2) norms are not simply linked to behaviors through individual perceptions, and (3) norms at different levels can operate independently of each other, interactively, or in opposition. This evidence creates space for conceptualizations of agency, conflict, and change that can lead to progress in understanding age norms and sexual behaviors.
Past primary sex-ratio estimates of 4 populations of Loggerhead sea turtle based on TSP durations.
NASA Astrophysics Data System (ADS)
Monsinjon, Jonathan; Kaska, Yakup; Tucker, Tony; LeBlanc, Anne Marie; Williams, Kristina; Rostal, David; Girondot, Marc
2016-04-01
Ectothermic species are expected to be strongly affected by climate change, particularly those that exhibit temperature-dependent sex determination (TSD). Predicting the embryonic response of such organisms to incubation-temperature variations under natural conditions remains challenging. To assess the vulnerability of sea turtles, primary sex-ratio estimates should be produced at pertinent ecological temporal and spatial scales. Although information on this important demographic parameter is a priority for conservation purposes, an accurate methodology to produce such estimates is still lacking. The most commonly used method invokes incubation duration as a proxy for sex ratio. This method is inappropriate because temperature influences incubation duration throughout development, whereas sex is influenced by temperature during only part of development. The thermosensitive period of development for sex determination (TSP) lies in the middle third of development. A model of embryonic growth must therefore be used to define precisely the position of the TSP under non-constant incubation temperatures. The thermal reaction norm for embryonic growth rate has been estimated for 4 distinct populations of the globally distributed and threatened marine turtle Caretta caretta. A thermal reaction norm describes the pattern of phenotypic expression of a single genotype across a range of temperatures. Moreover, incubation temperatures have been reconstructed for the last 35 years using a multi-correlative model with climate temperature. After modelling the development of the embryos, we estimated the primary sex ratio based on the duration of the TSP. Our results suggest that Loggerhead sea turtle nesting phenology is linked with the period within which both sexes can be produced in variable proportions. Several hypotheses will be discussed to explain why Caretta caretta could be more resilient to climate change, with respect to sex determination, than generally thought.
Wang, Shengqiang; Xiao, Cong; Ishizaka, Joji; Qiu, Zhongfeng; Sun, Deyong; Xu, Qian; Zhu, Yuanli; Huan, Yu; Watanabe, Yuji
2016-10-17
Knowledge of phytoplankton community structures is important to the understanding of various marine biogeochemical processes and ecosystems. Fluorescence excitation spectra (F(λ)) provide great potential for studying phytoplankton communities because their spectral variability depends on changes in the pigment compositions related to distinct phytoplankton groups. Commercial spectrofluorometers have been developed to analyze phytoplankton communities by measuring the field F(λ), but estimations using the default methods are not always accurate because of their strong dependence on norm spectra, which are obtained by culturing pure algae of a given group and are assumed to be constant. In this study, we propose a novel approach for estimating the chlorophyll a (Chl a) fractions of brown algae, cyanobacteria, green algae, and cryptophytes based on a data set collected in the East China Sea (ECS) and the Tsushima Strait (TS), with concurrent measurements of in vivo F(λ) and phytoplankton communities derived from pigment analysis. The new approach blends various statistical features by computing the band ratios and continuum-removed spectra of F(λ) without requiring a priori knowledge of the norm spectra. The model evaluations indicate that our approach yields good estimates of the Chl a fractions, with root-mean-square errors of 0.117, 0.078, 0.072, and 0.060 for brown algae, cyanobacteria, green algae, and cryptophytes, respectively. The statistical analysis shows that the models are generally robust to uncertainty in F(λ). We recommend using a site-specific model for more accurate estimation. To develop a site-specific model in the ECS and TS, approximately 26 samples are sufficient for our approach, but this conclusion needs to be validated in additional regions. Overall, our approach provides a useful technical basis for estimating phytoplankton communities from measurements of F(λ).
Norm-Aware Socio-Technical Systems
NASA Astrophysics Data System (ADS)
Savarimuthu, Bastin Tony Roy; Ghose, Aditya
The following sections are included:
* Introduction
* The Need for Norm-Aware Systems
  * Norms in human societies
  * Why should software systems be norm-aware?
* Case Studies of Norm-Aware Socio-Technical Systems
  * Human-computer interactions
  * Virtual environments and multi-player online games
  * Extracting norms from big data and software repositories
* Norms and Sustainability
  * Sustainability and green ICT
  * Norm awareness through software systems
* Where To, From Here?
* Conclusions
Eigenbeam analysis of the diversity in bat biosonar beampatterns.
Caspers, Philip; Müller, Rolf
2015-03-01
A quantitative analysis of the interspecific variability in bat biosonar beampatterns has been carried out on 267 numerical predictions of emission and reception beampatterns from 98 different species. Since these beampatterns did not share a common orientation, an alignment was necessary to analyze the variability in the shape of the patterns. To achieve this, beampatterns were aligned using a pairwise optimization framework based on a rotation-dependent cost function. The sum of the p-norms between beam-gain functions across frequency served as the figure of merit. For a representative subset of the data, it was found that all pairwise beampattern alignments resulted in a unique global minimum. This minimum was found to be contained in a subset of all possible beampattern rotations that could be predicted by the overall beam orientation. Following alignment, the beampatterns were decomposed into principal components. The average beampattern consisted of a symmetric, positionally static single lobe that narrowed and became progressively asymmetric with increasing frequency. The first three "eigenbeams" controlled the beam width of the beampattern across frequency, while higher-rank eigenbeams accounted for symmetry and lobe motion. Reception and emission beampatterns could be distinguished (85% correct classification) based on the first 14 eigenbeams.
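The decomposition into an average beam and "eigenbeams" is a principal component analysis of the aligned patterns; a minimal numpy sketch on flattened beampatterns (illustrative data layout, not the study's pipeline):

```python
import numpy as np

def principal_components(patterns):
    """PCA of flattened beampatterns: mean pattern plus 'eigenbeams'.

    `patterns` is an (n_patterns, n_directions) array. Returns the mean
    pattern, the components (rows of vt) sorted by decreasing explained
    variance, and the per-component variances.
    """
    x = np.asarray(patterns, dtype=float)
    mean = x.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    _, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt, s ** 2 / (len(x) - 1)
```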
Updating estimates of low streamflow statistics to account for possible trends
NASA Astrophysics Data System (ADS)
Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.
2017-12-01
Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of the low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (the most recent 30 years) can update 7Q10 estimators to better reflect current streamflow conditions.
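The 7Q10 computation described above can be sketched in a few lines of numpy (a simplified illustration using a plain empirical percentile; the study's nonparametric quantile estimator may differ in detail):

```python
import numpy as np

def annual_min_7day(daily_flows):
    """Minimum 7-day moving-average flow from one year of daily flows."""
    flows = np.asarray(daily_flows, dtype=float)
    kernel = np.ones(7) / 7.0
    return np.convolve(flows, kernel, mode="valid").min()

def q7_10(annual_minima, subset_years=30):
    """Nonparametric 7Q10 from the most recent `subset_years` annual minima.

    The 7Q10 is the flow exceeded in 9 of 10 years on average, i.e. the
    10th percentile of the annual minimum 7-day flows.
    """
    recent = np.asarray(annual_minima, dtype=float)[-subset_years:]
    return np.percentile(recent, 10)
```

Restricting the record to the most recent decades is what adapts the estimator to a trending climate.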
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. 
Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
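For the special case of two states, minimum-error distinguishability is given exactly by the Helstrom formula rather than by bounds; a numerical sketch (illustrative, not the paper's construction):

```python
import numpy as np

def helstrom_error(p0, rho0, p1, rho1):
    """Minimum error probability for discriminating two quantum states.

    Helstrom: P_err = (1 - ||p0*rho0 - p1*rho1||_1) / 2, where ||.||_1 is
    the trace norm. Since the weighted difference is Hermitian, its trace
    norm is the sum of the absolute values of its eigenvalues.
    """
    delta = p0 * np.asarray(rho0) - p1 * np.asarray(rho1)
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return 0.5 * (1.0 - trace_norm)
```

Orthogonal states can be distinguished perfectly (zero error); identical states force random guessing (error 1/2).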
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.
ERIC Educational Resources Information Center
Brown, Charles; And Others
1983-01-01
The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent 3-D holoscopic content, so efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. To obtain the signal's statistical behavior, kernel density estimation (KDE) is used to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme achieves better rate-distortion performance and better visual rendering quality.
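The KDE step can be sketched with a Gaussian kernel; note that the bandwidth here uses Silverman's classical rule of thumb for illustration, not the kernel-trick-based bandwidth estimation proposed in the paper:

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel."""
    x = np.asarray(samples, dtype=float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def kde(x, samples, h=None):
    """Gaussian kernel density estimate of the pdf at point(s) x."""
    s = np.asarray(samples, dtype=float)
    if h is None:
        h = silverman_bandwidth(s)
    # One scaled Gaussian bump per sample, averaged
    u = (np.atleast_1d(x)[:, None] - s[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(s) * h * np.sqrt(2 * np.pi))
```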
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of the mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = sum_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML; in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
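The minimum distance idea in this entry can be made concrete with a small numerical sketch: estimate the mixing proportion of a two-component normal mixture by minimizing a Cramér-von Mises-type distance between the empirical and model CDFs. The components, sample size, and objective below are illustrative choices, not the study's exact design.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def md_proportion(x, f1, f2):
    """Minimum-distance (Cramer-von Mises type) estimate of the mixing
    proportion p in f(x) = p*f1(x) + (1 - p)*f2(x)."""
    xs = np.sort(x)
    n = len(xs)
    ecdf = (np.arange(1, n + 1) - 0.5) / n  # empirical CDF at the order statistics
    def cvm(p):
        model_cdf = p * f1.cdf(xs) + (1 - p) * f2.cdf(xs)
        return np.sum((model_cdf - ecdf) ** 2)
    return minimize_scalar(cvm, bounds=(0.0, 1.0), method="bounded").x

# Two well-separated normal components, true mixing proportion 0.3.
rng = np.random.default_rng(0)
n = 2000
from_first = rng.random(n) < 0.3
x = np.where(from_first, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))
p_hat = md_proportion(x, norm(0, 1), norm(3, 1))
```

With well-separated components the distance-based estimate lands close to the true proportion; the study's point is how this behaves when the assumed normal components are wrong.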
Neighborhood Context and Binge Drinking by Race and Ethnicity in New York City
Chauhan, Preeti; Ahern, Jennifer; Galea, Sandro; Keyes, Katherine M
2016-01-01
Background: Neighborhood context is associated with binge drinking and has significant health, societal, and economic costs. Both binge drinking and neighborhood context vary by race and ethnicity. We examined the relations between neighborhood characteristics (neighborhood norms that are accepting of drunkenness, collective efficacy, and physical disorder) and binge drinking, with a focus on race- and ethnicity-specific relationships. Methods: Respondent data were collected through a 2005 random-digit-dial telephone survey of a representative sample of New York City residents; neighborhood data were based on the 2005 New York City Housing and Vacancy Survey. Participants were 1,415 past-year drinkers: Whites (n = 877), Blacks (n = 292), and Hispanics (n = 246). Generalized estimating equations (GEE) were used to estimate population-average models. Results: For the overall sample, neighborhood norms that were more accepting of drunkenness were associated with greater binge drinking (OR = 1.22; 95% CI = 1.09, 1.37); collective efficacy and physical disorder were not significant. However, when examined by race/ethnicity, greater collective efficacy (OR = 0.75; 95% CI = 0.62, 0.91) and greater physical disorder (OR = 0.76; 95% CI = 0.62, 0.93) were associated with less binge drinking for Whites only. Neighborhood norms that were more accepting of drunkenness were associated with binge drinking among Whites (OR = 1.20; 95% CI = 1.05, 1.38) and, while not significant (perhaps due to power), the associations were similar for Hispanics (OR = 1.18; 95% CI = 0.83, 1.68) and slightly lower for Blacks (OR = 1.11; 95% CI = 0.67, 1.84). Conclusions: Overall, results suggest that neighborhood characteristics and binge drinking are shaped, in part, by factors that vary across race/ethnicity. Thus, disaggregating data by race/ethnicity is important in understanding binge drinking behaviors. PMID:26969558
NASA Astrophysics Data System (ADS)
Mueller, B.; Zhang, X.; Zwiers, F. W.
2015-12-01
Global mean temperatures are projected to increase by 3 K and 4.9 K above pre-industrial levels by 2100 in a moderate stabilization and a high-emission scenario (RCP4.5 and RCP8.5 from CMIP5), respectively. However, warming rates differ regionally. In this presentation, we focus on the regions defined by the IPCC SREX report. We investigate the year in the future in which historically hottest summers are projected to become the norm, i.e. to occur at least every other year. Using results from a detection and attribution analysis, we provide probabilistic estimates based on RCP4.5 and RCP8.5 simulations constrained by observations during 1950-2012. We also estimate the fraction of attributable risk (FAR), i.e. the probability for hot summers that is attributable to past emissions of anthropogenic greenhouse gases and aerosols. We find that the FAR is larger than 0.9 in many regions. We project that under RCP4.5, more than half of the world's population will experience the historically hottest summer of the past 63 years with a probability of 50% and 90% by 2035 and 2050, respectively. Under the higher-emission scenario RCP8.5, historically hottest summers are projected to be more widespread. The Mediterranean region, Western and Eastern Asia, Northern Eurasia, and the Sahara are among the first regions for which such hot summers might become the norm. Even under RCP4.5, more than 90% of summers are projected to be hotter than the historically hottest summers by 2025 and 2035 for the Sahara and Mediterranean regions, respectively.
Loss of quality of life associated with genital warts: baseline analyses from a prospective study.
Sénécal, Martin; Brisson, Marc; Maunsell, Elizabeth; Ferenczy, Alex; Franco, Eduardo L; Ratnam, Sam; Coutlée, François; Palefsky, Joel M; Mansi, James A
2011-04-01
The quadrivalent human papillomavirus (HPV) vaccine is effective against HPV types responsible for 90% of anogenital warts. This study estimated the quality of life lost to genital warts using the EQ-5D, a generic instrument widely used for applications in economic analyses. The findings are described in terms that are more specific to individuals with genital warts using psychosocial questions adapted from the HPV impact profile, a measure developed for HPV-related conditions. Between September 2006 and February 2008, 42 physicians across Canada recruited 330 consenting patients 18 years and older with genital warts, either at the first or follow-up visit for an initial or recurrent episode. The quality of life lost associated with genital warts was estimated by the difference between participants' EQ-5D scores and age and gender-specific population norms. The study questionnaire was self-completed by 270 participants who were aged 31.5 years (SD 10.4) on average. The majority of participants were women (53.3%), heterosexual (93.5%) and in a stable relationship (66.0%). Genital warts were associated with detriments in the EQ-5D domains of anxiety/depression, pain/discomfort and usual activities. The absolute difference in the EQ-5D utility score and the EQ-VAS health status between genital warts patients and population norms was 9.9 (95% CI 7.3 to 12.5) and 6.0 (95% CI 4.1 to 7.9) percentage points, respectively. These results did not vary significantly according to patient age, gender, time since first episode or number of episodes. The results suggest that genital warts negatively affect the wellbeing of men and women as reflected by poorer quality of life scores compared with population norms.
Estimation of daily minimum land surface air temperature using MODIS data in southern Iran
NASA Astrophysics Data System (ADS)
Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza
2017-11-01
Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of these processes are affected by LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies sought statistical models to estimate LSAT at the standardized 2 m height (LSAT2m), and LSAT5cm estimation models have received little attention. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model using previous-day data to accurately estimate spatial daily minimum LSAT5cm, which is very important for agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites for daytime and nighttime periods, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that using the previous day's MODIS Aqua nighttime data provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, adjusted R² = 87%). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in winter was lower than in the other seasons (RMSD = 3.55 °C), and errors in summer and winter were larger than in the remaining seasons.
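The core of such a model is an ordinary least-squares fit of minimum air temperature on satellite predictors. A minimal synthetic sketch follows; the predictor names, coefficients, and noise level are invented for illustration and are not taken from the study (which used stepwise selection).

```python
import numpy as np

# Synthetic illustration of a linear model for daily minimum near-surface
# air temperature from previous-night LST, NDVI, and elevation.
rng = np.random.default_rng(1)
n = 500
lst_night = rng.uniform(-5.0, 25.0, n)   # previous-night MODIS LST (deg C), assumed
ndvi = rng.uniform(0.05, 0.7, n)         # vegetation index, assumed range
elev_km = rng.uniform(0.5, 3.0, n)       # elevation (km), assumed range
lsat_min = (0.9 * lst_night + 3.0 * ndvi - 2.0 * elev_km - 1.5
            + rng.normal(0.0, 1.0, n))   # assumed "true" relation plus noise

# Ordinary least-squares fit of the same linear form.
X = np.column_stack([lst_night, ndvi, elev_km, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, lsat_min, rcond=None)
rmsd = np.sqrt(np.mean((X @ coef - lsat_min) ** 2))
```

The recovered RMSD here reflects only the injected noise; the study's 3 °C figure additionally absorbs sensor and representativeness errors.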
ERIC Educational Resources Information Center
Iyer, Vidya V.
2011-01-01
Despite the phenomenal growth projected for the Indian information technology (IT) industry, one of the biggest challenges it faces is the high rate of turnover in offshore supplier firms based in India (Everest Research Group 2011). According to recent estimates, turnover rates among Indian information systems (IS) professionals have been…
Comparing Entering Freshmen's Perceptions of Campus Marijuana and Alcohol Use to Reported Use
ERIC Educational Resources Information Center
Gold, Gregg J.; Nguyen, Alyssa T.
2009-01-01
Use of marijuana and alcohol among current college students (N = 1101) was compared to the perceptions and use of entering freshmen (N = 481) surveyed before the start of classes. Entering freshmen significantly misperceived campus norms for marijuana use, overestimating to the point that almost every student was believed to have used in the last 30 days, p < 0.001.…
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes the DOAs of sources to lie on a grid; its performance degrades due to basis mismatch when this assumption is not satisfied. To overcome this limitation for measurements with planar microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum-based atomic norm minimization problem is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite program is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on the alternating direction method of multipliers is presented to solve the positive semidefinite program. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
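The general scheme described here, gradient descent on a kernel expansion under a windowed robust loss with early stopping, can be sketched in a few lines. The Welsch-type exponential window, kernel width, learning rate, and step count below are illustrative assumptions, not the paper's exact algorithm or tuning.

```python
import numpy as np

def robust_kernel_gd(x, y, sigma=1.0, width=0.2, lr=0.05, steps=500):
    """Gradient descent in an RKHS with a Welsch-type robust loss:
    residuals are down-weighted by exp(-r^2 / (2 sigma^2)), so gross
    outliers contribute almost nothing to the gradient."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2))  # Gaussian kernel
    alpha = np.zeros(len(y))
    for _ in range(steps):
        r = K @ alpha - y                       # residuals of current iterate
        w = np.exp(-r ** 2 / (2 * sigma ** 2))  # robust weights (outliers -> ~0)
        alpha -= lr * K @ (w * r) / len(y)      # functional gradient step
    return K @ alpha                            # fitted values (early stopping)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 100))
clean = np.sin(2 * np.pi * x)
y = clean + rng.normal(0.0, 0.1, 100)
y[::10] += 5.0                                  # gross outliers
fit = robust_kernel_gd(x, y)
```

Because the window gives the shifted points near-zero weight from the first iteration, the fit tracks the underlying curve rather than the outliers; the finite step count plays the regularizing role of early stopping.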
Clark, Margaret S; Lemay, Edward P; Graham, Steven M; Pataki, Sherri P; Finkel, Eli J
2010-07-01
Couples reported on bases for giving support and on relationship satisfaction just prior to and approximately 2 years into marriage. Overall, a need-based, noncontingent (communal) norm was seen as ideal and was followed, and greater use of this norm was linked to higher relationship satisfaction. An exchange norm was seen as not ideal and was followed significantly less frequently than was a communal norm; by 2 years into marriage, greater use of an exchange norm was linked with lower satisfaction. Insecure attachment predicted greater adherence to an exchange norm. Idealization of and adherence to a communal norm dropped slightly across time. As idealization of a communal norm and own use and partner use of a communal norm decreased, people high in avoidance increased their use of an exchange norm, whereas people low in avoidance decreased their use of an exchange norm. Anxious individuals evidenced tighter links between norm use and marital satisfaction relative to nonanxious individuals. Overall, a picture of people valuing a communal norm and striving toward adherence to a communal norm emerged, with secure individuals doing so with more success and equanimity across time than insecure individuals.
NASA Astrophysics Data System (ADS)
Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre
2014-12-01
In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
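The minimum weighted norm solution this entry places the renormalized estimate within has a closed form via the weighted generalized inverse. A minimal numerical sketch, with an arbitrary random operator and diagonal weight matrix standing in for the source-receptor model (illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 10                       # underdetermined: fewer data than unknowns
A = rng.normal(size=(m, n))        # illustrative linear observation operator
b = rng.normal(size=m)             # measured data
w = rng.uniform(0.5, 2.0, n)       # positive weights
W = np.diag(w)

# Minimum weighted norm solution of A x = b: minimizes x^T W^{-1} x over all
# feasible x, given by the weighted generalized inverse
#   x = W A^T (A W A^T)^{-1} b.
x_mwn = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)

# Any other feasible solution has a larger weighted norm, e.g. the plain
# minimum 2-norm (unweighted) solution:
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
```

Choosing W to satisfy a renormalization condition, as in the entry, amounts to selecting one member of this family with the desired resolution properties.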
Hauk, O; Patterson, K; Woollams, A; Watling, L; Pulvermüller, F; Rogers, T T
2006-05-01
Using a speeded lexical decision task, event-related potentials (ERPs), and minimum norm current source estimates, we investigated early spatiotemporal aspects of cortical activation elicited by words and pseudo-words that varied in their orthographic typicality, that is, in the frequency of their component letter pairs (bigrams) and letter triplets (trigrams). At around 100 msec after stimulus onset, the ERP pattern revealed a significant typicality effect, where words and pseudo-words with atypical orthography (e.g., yacht, cacht) elicited stronger brain activation than items characterized by typical spelling patterns (cart, yart). At approximately 200 msec, the ERP pattern revealed a significant lexicality effect, with pseudo-words eliciting stronger brain activity than words. The two main factors interacted significantly at around 160 msec, where words showed a typicality effect but pseudo-words did not. The principal cortical sources of the effects of both typicality and lexicality were localized in the inferior temporal cortex. Around 160 msec, atypical words elicited stronger source currents in the left anterior inferior temporal cortex, whereas the left perisylvian cortex was the site of greater activation to typical words. Our data support distinct but interactive processing stages in word recognition, with surface features of the stimulus being processed before the word as a meaningful lexical entry. The interaction of typicality and lexicality can be explained by integration of information from the early form-based system and lexicosemantic processes.
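The minimum norm current source estimates used in this entry follow, in textbook form, a Tikhonov-regularized linear inverse of the lead field. A minimal Python sketch with a random lead field (the dimensions, regularization, and source index are illustrative, not the study's pipeline):

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Regularized minimum norm estimate of source currents x from sensor
    data y = L x + noise, using x_hat = L^T (L L^T + lam I)^{-1} y."""
    n_sensors = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

rng = np.random.default_rng(4)
L = rng.normal(size=(32, 500))      # 32 sensors, 500 candidate sources (assumed)
x_true = np.zeros(500)
x_true[123] = 1.0                   # one focal source; index chosen arbitrarily
y = L @ x_true + 0.01 * rng.normal(size=32)
x_hat = minimum_norm_estimate(L, y)
```

The estimate is spatially blurred, as minimum norm solutions are, but its peak falls on the simulated source, which is the property localization analyses like this one rely on.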
Raising Awareness on Heat Related Mortality in Bangladesh
NASA Astrophysics Data System (ADS)
Arrighi, J.; Burkart, K.; Nissan, H.
2017-12-01
Extreme heat is the leading cause of weather-related deaths in the United States and Europe, and was responsible for four of the ten deadliest natural disasters worldwide in 2015. Near the tropics, where hot weather is considered the norm, perceived heat risk is often low, but recent heat waves in South Asia have caught the attention of the health community, policy-makers, and the public. In a recent collaboration between the Red Cross Red Crescent Climate Centre, Columbia University, and BBC Media Action, the effects of extreme heat in Bangladesh were analyzed, and the findings were subsequently used as a basis to raise public awareness about the impacts of extreme heat on the most vulnerable. Analysis of excess heat in Bangladesh between 2003 and 2007 showed that heatwaves occur between April and June, with most extreme heat events occurring in May. Over this period, an estimated average of 1,500 people died per year due to heatwaves lasting three days or longer, with an eight-day heatwave in 2005 resulting in a minimum of 3,800 excess deaths. Utilizing these findings, BBC Media Action launched an online communications campaign in May 2017, ultimately reaching approximately 3.9 million people with information on reducing the impacts of extreme heat. This presentation will highlight key findings from the study of heat-related mortality in Bangladesh, as well as the benefit of collaboration between scientists and communicators for increasing awareness about the effects of extreme heat on the most vulnerable.
The inverse problem in electroencephalography using the bidomain model of electrical activity.
Lopez Rincon, Alejandro; Shimoda, Shingo
2016-12-01
Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimates (MNEs) and low resolution electrical tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of the Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials.
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.
2011-01-01
Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input for hydrology and land surface models. This preliminary study evaluates estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) using MODIS-Terra data over two Eurasian regions: northern China and the former USSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationship between maximum T(sub a) and daytime T(sub s) depends significantly on land cover type, but minimum T(sub a) and nighttime T(sub s) show little dependence on land cover type. The largest difference between maximum T(sub a) and daytime T(sub s) appears over barren and sparsely vegetated areas during summer. Using a linear regression method, daily maximum T(sub a) was estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated by land cover type, while minimum T(sub a) was estimated without considering land cover type. The uncertainty, as mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.
NASA Astrophysics Data System (ADS)
Wen, Huanyao; Zhu, Limei
2018-02-01
In this paper, we consider the Cauchy problem for a two-phase model with magnetic field in three dimensions. The global existence and uniqueness of the strong solution, as well as time decay estimates in H²(ℝ³), are obtained by introducing a new linearized system with respect to (n^γ − ñ^γ, n − ñ, P − P̃, u, H) for constants ñ ≥ 0 and P̃ > 0, and by deriving new a priori estimates in Sobolev spaces to obtain a uniform upper bound on (n − ñ, n^γ − ñ^γ) in the H²(ℝ³) norm.
Decay estimates of solutions to the bipolar non-isentropic compressible Euler-Maxwell system
NASA Astrophysics Data System (ADS)
Tan, Zhong; Wang, Yong; Tong, Leilei
2017-10-01
We consider the global existence and large-time behavior of solutions near a constant equilibrium state to the bipolar non-isentropic compressible Euler-Maxwell system in ℝ³, where the background magnetic field can be non-zero. Global existence is established under the assumption that the H³ norm of the initial data is small, but its higher-order derivatives can be large. Combining negative Sobolev (or Besov) estimates with interpolation estimates, we prove optimal time decay rates for the solution and its higher-order spatial derivatives. In this sense, our results improve the similar ones in Wang et al (2012 SIAM J. Math. Anal. 44 3429-57).
Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume
2012-01-01
We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance, in minutes by car, to the municipality that inhabitants visit most frequently for services (called the MFM). For each set corresponding to a range of time distances to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar size, the number of proximity-service jobs per inhabitant increases with distance to the MFM.
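Estimating a "minimum requirement" as a low conditional quantile can be done with the standard linear-programming formulation of quantile regression. The sketch below fits a 5% quantile of jobs versus population on synthetic data; the data-generating constants and the choice of quantile are assumptions for illustration, not the study's model.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau):
    """Quantile regression via its standard LP formulation:
    min tau*sum(u_plus) + (1-tau)*sum(u_minus)
    s.t. X b + u_plus - u_minus = y, with b split into b_plus - b_minus."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(7)
pop = rng.uniform(100.0, 5000.0, 300)                  # municipality populations
jobs = 0.02 * pop + rng.gamma(2.0, 2.0, 300) * pop / 1000.0  # floor plus extra jobs
X = np.column_stack([np.ones(pop.size), pop])
beta = quantile_reg(X, jobs, tau=0.05)                 # near-minimum jobs vs. size
frac_below = np.mean(jobs < X @ beta)
```

By construction roughly 5% of municipalities fall below the fitted line, so its slope reads as a minimum jobs-per-inhabitant requirement.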
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. 
This is possible because a back-projection matrix, independent of the registered amplitude, is obtained and stored for each seismic station. As a direct consequence, we save computing time in the calculation of the final back-projected maximum resultant amplitude at every grid point. The capability of the method was first demonstrated using synthetic data. Next, the method was applied to data from 43 local earthquakes of low and medium magnitude (1.7 < magnitude < 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) between 2010/06/11 and 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria and what makes it a relatively high earthquake hazard and risk area. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
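The core idea of this entry, back-projecting station amplitudes to pseudo-magnitudes and taking the minimum over stations at each grid point, can be illustrated on a noise-free toy example. The attenuation law m = log10(A) + c·log10(r), the constant c, and the station geometry below are all illustrative assumptions, not the paper's calibrated empirical model.

```python
import numpy as np

def locate(stations, amplitudes, grid, c=1.5):
    """Back-project each station's peak amplitude to a pseudo-magnitude at
    every grid point via an assumed relation m = log10(A) + c*log10(r),
    keep the MINIMUM over stations (robust to locally disturbed stations),
    and pick the grid point where this minimum field peaks."""
    r = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    pseudo = np.log10(amplitudes)[None, :] + c * np.log10(r + 1e-6)
    min_pseudo = pseudo.min(axis=1)
    return grid[np.argmax(min_pseudo)], min_pseudo.max()

# Synthetic event at (3, 4) with magnitude 3, stations surrounding the area.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -2.0]])
event, mag = np.array([3.0, 4.0]), 3.0
r_true = np.linalg.norm(stations - event, axis=1)
amplitudes = 10.0 ** (mag - 1.5 * np.log10(r_true))   # noise-free forward model
gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
grid = np.column_stack([gx.ravel(), gy.ravel()])
epicenter, peak_mag = locate(stations, amplitudes, grid)
```

At the true epicenter every station back-projects to the same pseudo-magnitude, so the minimum field peaks there at the event magnitude; an outlier-high amplitude at one station cannot move the peak because the minimum ignores it.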
Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine
2017-07-01
Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
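The bootstrap subsampling experiment described here can be sketched directly: subsample a 15-min record at lower frequencies with random phase and compare the recovered statistics. The synthetic "flashy" hydrograph and all constants below are assumptions for illustration, not the study's USGS data.

```python
import numpy as np

def subsample_stats(flow, step, n_iter=50, seed=0):
    """Mimic lower-frequency citizen observations: draw n_iter subsamples
    of a 15-min record at a fixed step (96 = daily, 96*30 ~ monthly) with
    random phase, returning (min, max, mean) for each subsample."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_iter):
        sub = flow[rng.integers(0, step)::step]
        out.append((sub.min(), sub.max(), sub.mean()))
    return np.array(out)

rng = np.random.default_rng(5)
n = 96 * 365                                  # one year of 15-min samples
flow = 5.0 + rng.gamma(2.0, 0.5, n)           # slowly varying base flow
for p in rng.choice(n, 20, replace=False):    # 20 short storm spikes (~2 h each)
    flow[p:p + 8] += 200.0

daily = subsample_stats(flow, 96)             # one observation per day
monthly = subsample_stats(flow, 96 * 30)      # roughly one per month
```

As in the study, minimum flow survives coarse subsampling well, while short flood peaks are routinely missed at monthly frequency, so maximum flow is badly underestimated.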
Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.
1987-01-01
The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author's abstract)
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). 
Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
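For a linear forward problem, the bounding step described above reduces to a pair of optimizations: minimize the regional average subject to fitting the data for a lower bound, maximize it for an upper bound. The toy sketch below uses `scipy.optimize.linprog` with a made-up linear operator `G` in place of the nonlinear MT response, and skips the paper's NNLS reformulation entirely; it only illustrates the bounding idea.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 20                             # model cells (e.g. conductivity layers)
m_true = np.ones(n)
m_true[8:12] = 5.0                 # a conductive block
G = rng.uniform(0.0, 1.0, (8, n))  # toy linear forward operator
d = G @ m_true                     # noise-free synthetic data

region = np.zeros(n)
region[8:12] = 0.25                # averaging functional over the block
tol = 1e-6                         # allowed data misfit

def bound(sign):
    """sign=+1: lower bound on the regional average; sign=-1: upper bound."""
    A_ub = np.vstack([G, -G])      # |G m - d| <= tol as two inequalities
    b_ub = np.concatenate([d + tol, -(d - tol)])
    res = linprog(sign * region, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    assert res.success
    return sign * res.fun

lo, hi = bound(+1), bound(-1)
true_avg = float(region @ m_true)
print(lo, true_avg, hi)            # lo <= true_avg <= hi
```

With 8 data and 20 unknowns the problem is badly underdetermined, yet the interval [lo, hi] is guaranteed to contain the true regional average, which is the essence of the uncertainty measure the abstract proposes.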
Relative dynamics and motion control of nanosatellite formation flying
NASA Astrophysics Data System (ADS)
Pimnoo, Ammarin; Hiraki, Koju
2016-04-01
Orbit selection is a necessary factor in nanosatellite formation mission design; meanwhile, keeping the formation requires fuel. Therefore, the best orbit design for nanosatellite formation flying is the one that requires the minimum fuel consumption. The purpose of this paper is to analyse orbit selection with respect to minimum fuel consumption, to provide a convenient way to estimate the fuel consumption needed to keep nanosatellites in formation flight and to present a simplified method of formation control. The formation structure is disturbed by the J2 gravitational perturbation and other perturbing accelerations such as atmospheric drag. First, Gauss' Variational Equations (GVE) are used to estimate the essential ΔV due to the J2 perturbation and atmospheric drag. The essential ΔV indicates which orbit is preferable with respect to minimum fuel consumption. Then, the linear equations of Schweighart-Sedwick, which account for the J2 gravitational perturbation, are presented and used to estimate the fuel consumption needed to maintain the formation structure. Finally, the relative dynamics of the motion are presented, as well as a simplified motion control of the formation structure using GVE.
Elsworth, Gerald R; Osborne, Richard H
2017-01-01
Objective: Participant self-report data play an essential role in the evaluation of health education activities, programmes and policies. When questionnaire items do not have a clear mapping to a performance-based continuum, percentile norms are useful for communicating individual test results to users. Similarly, when assessing programme impact, the comparison of effect sizes for group differences or baseline to follow-up change with effect sizes observed in relevant normative data provides more directly useful information compared with statistical tests of mean differences and the evaluation of effect sizes for substantive significance using universal rules-of-thumb such as those for Cohen’s ‘d’. This article aims to assist managers, programme staff and clinicians of healthcare organisations who use the Health Education Impact Questionnaire interpret their results using percentile norms for individual baseline and follow-up scores together with group effect sizes for change across the duration of a typical chronic disease self-management and support programme. Methods: Percentile norms for individual Health Education Impact Questionnaire scale scores and effect sizes for group change were calculated using freely available software for each of the eight Health Education Impact Questionnaire scales. Data used were archived responses of 2157 participants of chronic disease self-management programmes conducted by a wide range of organisations in Australia between July 2007 and March 2013. Results: Tables of percentile norms and three possible effect size benchmarks for baseline to follow-up change are provided together with two worked examples to assist interpretation.
Conclusion: While the norms and benchmarks presented will be particularly relevant for Australian organisations and others using the English-language version of the Health Education Impact Questionnaire, they will also be useful for translated versions as a guide to the sensitivity of the scales and the extent of the changes that might be anticipated from attendance at a typical chronic disease self-management or health education programme. PMID:28560039
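The two quantities the article tabulates, percentile norms for individual scores and effect sizes for group change, can be sketched as follows. The scores below are simulated stand-ins on a heiQ-like 1-4 scale, not the actual normative data.

```python
import numpy as np

def percentile_norm(norm_scores, x):
    """Percentile rank of score x within a normative sample
    (mean-rank convention for ties)."""
    norm_scores = np.asarray(norm_scores)
    below = np.mean(norm_scores < x)
    at = np.mean(norm_scores == x)
    return 100 * (below + 0.5 * at)

def cohens_d(pre, post):
    """Effect size for baseline-to-follow-up change (pooled SD,
    equal group sizes)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    sp = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / sp

# Hypothetical scale scores clipped to the 1-4 response range.
rng = np.random.default_rng(0)
baseline = np.clip(rng.normal(2.9, 0.5, 500), 1, 4)
follow = np.clip(baseline + rng.normal(0.2, 0.4, 500), 1, 4)

print(percentile_norm(baseline, 3.0), cohens_d(baseline, follow))
```

A clinician would read `percentile_norm(baseline, 3.0)` as "a baseline score of 3.0 sits at this percentile of the normative sample", and compare an observed programme's d against tabulated benchmarks rather than against a universal rule-of-thumb.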
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
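The order swap at the heart of this abstract rests on a standard identity for the Tikhonov-regularized pseudoinverse, A^T (A A^T + λI)^{-1} = (A^T A + λI)^{-1} A^T: applying the adjoint (backprojection) first and filtering second yields the same minimum-norm estimate as filtering first. A dense toy sketch, with random matrices standing in for the prism dispersion operator:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))  # toy forward operator (stand-in for dispersion)
y = rng.normal(size=30)        # measured frames, flattened (stand-in data)
lam = 0.1                      # Tikhonov regularization weight

# "Filtering, then backprojection": solve in data space, apply adjoint last.
x_fb = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(30), y)

# "Backprojection, then filtering": apply adjoint first, solve in model space.
# (In the instrument this model-space solve reduces to a cheap 1D filter over
# the chromatic variable, implemented with the FFT.)
x_bf = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ y)

print(np.allclose(x_fb, x_bf))
```

The two routes agree to machine precision; the practical payoff in the instrument is that the post-backprojection filter acts only along the 1D chromatic axis.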
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
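A minimal sketch of the Gamma-type and L-type input membership functions and Mamdani minimum inference named above. The breakpoints and inputs are hypothetical, not the paper's controller parameters.

```python
def gamma_mf(x, a, b):
    """Gamma-type membership: 0 below a, linear ramp on [a, b], 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def l_mf(x, a, b):
    """L-type membership: the mirror image of the Gamma-type."""
    return 1.0 - gamma_mf(x, a, b)

# One rule under Mamdani minimum inference: the firing strength is the
# minimum of the two input memberships (hypothetical breakpoints at +/-1).
error, d_error = 0.3, -0.1
strength = min(gamma_mf(error, -1.0, 1.0), l_mf(d_error, -1.0, 1.0))
print(strength)
```

Skewing the sets amounts to choosing a and b asymmetrically about zero; the paper's closed-form controller models then follow by working this inference through the center-of-sums defuzzifier.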
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in the linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations, and involves mainly utilisation of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to the controllable systems, the proposed method can be applied for the stabilisable ones and also systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) the numerical one, which is unique, and (2) the parametric one, in which its parameters are determined in order to achieve a gain matrix with minimum Frobenius norm. The numerical examples are presented to demonstrate the advantages of the proposed method.
Si, Xingfeng; Kays, Roland
2014-01-01
Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing the minimum trapping effort needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites or running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, long enough to record about 20 independent photographs. Our analysis of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493
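The rarefaction logic, i.e., how detected species richness accumulates with camera-day effort, can be sketched on a simulated detection matrix. The species detection probabilities below are made up, not the Gutianshan data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, n_species = 2000, 10
# Hypothetical per-camera-day detection probabilities, rarest species last.
p = np.geomspace(0.2, 0.001, n_species)
det = rng.random((n_days, n_species)) < p  # camera-day x species detections

def mean_richness(effort, n_boot=100):
    """Average number of species detected in `effort` randomly drawn
    camera-days (bootstrap resampling with replacement)."""
    counts = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_days, effort)
        counts.append(det[idx].any(axis=0).sum())
    return np.mean(counts)

curve = [mean_richness(e) for e in (50, 200, 800)]
print(curve)  # richness accumulates with effort
```

The curve rises steeply at first and flattens as only the rarest species remain undetected, which is why the study's estimate of the effort needed to catch all 10 residents (c. 8700 camera days) is so much larger than the effort needed for "sufficient" detection (931 camera days).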
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Quinn, Patrick D.; Fromme, Kim
2011-01-01
Objective: Although alcohol use and related problems are highly prevalent in emerging adulthood overall, college students drink somewhat more than do their peers who do not attend college. The personal or social influences underlying this difference, however, are not yet well understood. The present study examined whether personality traits (i.e., self-regulation and sensation seeking) and peer influence (i.e., descriptive drinking norms) contributed to student status differences. Method: At approximately age 22, 4-year college students (n = 331) and noncollege emerging adults (n = 502) completed web-based surveys, including measures of alcohol use, alcohol-related problems, personality, and social norms. Results: College students drank only slightly more heavily. This small difference, however, reflected personality suppression. College students were lower in trait-based risk for drinking, and accounting for traits revealed a stronger positive association between attending college and drinking more heavily. Although noncollege emerging adults reported greater descriptive drinking norms for social group members, norms appeared to more strongly influence alcohol use among college students. Finally, despite drinking less, noncollege individuals experienced more alcohol-related problems. Conclusions: The association between attending college and drinking heavily may be larger than previously estimated, and it may be masked by biased selection into college as a function of both self-regulation and sensation seeking. Differing patterns of alcohol use, its predictors, and its consequences emerged for the college and noncollege samples, suggesting that differing intervention strategies may best meet the needs of each population. PMID:21683044
Radiological issues associated with the recent boom in oil and gas hydraulic fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Alejandro
As the worldwide hydraulic fracturing ('fracking') market continued to grow to an estimated $37 billion in 2012, the need to understand and manage radiological issues associated with fracking is becoming imperative. Fracking is a technique that injects pressurized fluid into a rock layer to propagate fractures, allowing natural gas and other petroleum products to be more easily extracted. Radioactivity is associated with fracking in two ways. First, radioactive tracers are frequently a component of the injection fluid used to determine the injection profile and locations of fractures. Second, because there are naturally-occurring radioactive materials (NORM) in the media surrounding and containing oil and gas deposits, the process of fracking can dislodge radioactive materials and transport them to the surface in the wastewater and gases. Treatment of the wastewater to remove heavy metals and other contaminants can concentrate the NORM into technologically-enhanced NORM (TENORM). Regulations to classify, transport, and dispose of the TENORM and other radioactive waste can be complicated and cumbersome and vary widely in the international community and even between states/provinces. In many cases, regulations on NORM and TENORM do not even exist. Public scrutiny and regulator pressure will only continue to increase as world demands for oil and gas continue to rise and greater quantities of TENORM materials are produced. Industry experts, health physicists, regulators, and public communities must work together to understand and manage radiological issues to ensure reasonable and effective regulations protective of the public, environment, and worker safety and health are implemented. (authors)
The size of the irregular migrant population in the European Union – counting the uncountable?
Vogel, Dita; Kovacheva, Vesela; Prescott, Hannah
2011-01-01
It is difficult to estimate the size of the irregular migrant population in a specific city or country, and even more difficult to arrive at estimates at the European level. A review of past attempts at European-level estimates reveals that they rely on rough and outdated rules-of-thumb. In this paper, we present our own European level estimates for 2002, 2005, and 2008. We aggregate country-specific information, aiming at approximate comparability by consistent use of minimum and maximum estimates and by adjusting for obvious differences in definition and timescale. While the aggregated estimates are not considered highly reliable, they do -- for the first time -- provide transparency. The provision of more systematic medium quality estimates is shown to be the most promising way for improvement. The presented estimate indicates a minimum of 1.9 million and a maximum of 3.8 million irregular foreign residents in the 27 member states of the European Union (2008). Unlike rules-of-thumb, the aggregated EU estimates indicate a decline in the number of irregular foreign residents between 2002 and 2008. This decline has been influenced by the EU enlargement and legalisation programmes.
Using Peer Injunctive Norms to Predict Early Adolescent Cigarette Smoking Intentions
Zaleski, Adam C.; Aloise-Young, Patricia A.
2013-01-01
The present study investigated the importance of the perceived injunctive norm to predict early adolescent cigarette smoking intentions. A total of 271 6th graders completed a survey that included perceived prevalence of friend smoking (descriptive norm), perceptions of friends’ disapproval of smoking (injunctive norm), and future smoking intentions. Participants also listed their five best friends, in which the actual injunctive norm was calculated. Results showed that smoking intentions were significantly correlated with the perceived injunctive norm but not with the actual injunctive norm. Secondly, the perceived injunctive norm predicted an additional 3.4% of variance in smoking intentions above and beyond the perceived descriptive norm. These results demonstrate the importance of the perceived injunctive norm in predicting early adolescent smoking intentions. PMID:24078745
Social norms and their influence on eating behaviours.
Higgs, Suzanne
2015-03-01
Social norms are implicit codes of conduct that provide a guide to appropriate action. There is ample evidence that social norms about eating have a powerful effect on both food choice and amounts consumed. This review explores the reasons why people follow social eating norms and the factors that moderate norm following. It is proposed that eating norms are followed because they provide information about safe foods and facilitate food sharing. Norms are a powerful influence on behaviour because following (or not following) norms is associated with social judgements. Norm following is more likely when there is uncertainty about what constitutes correct behaviour and when there is greater shared identity with the norm referent group. Social norms may affect food choice and intake by altering self-perceptions and/or by altering the sensory/hedonic evaluation of foods. The same neural systems that mediate the rewarding effects of food itself are likely to reinforce the following of eating norms. Copyright © 2014 Elsevier Ltd. All rights reserved.
Behavioral and physiological significance of minimum resting metabolic rate in king penguins.
Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y
2008-01-01
Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.
Reference genes for reverse transcription quantitative PCR in canine brain tissue.
Stassen, Quirine E M; Riemers, Frank M; Reijmerink, Hannah; Leegwater, Peter A J; Penning, Louis C
2015-12-09
In the last decade canine models have been used extensively to study genetic causes of neurological disorders such as epilepsy and Alzheimer's disease and to unravel their pathophysiological pathways. Reverse transcription quantitative polymerase chain reaction is a sensitive and inexpensive method to study expression levels of genes involved in disease processes. Accurate normalisation with stably expressed so-called reference genes is crucial for reliable expression analysis. Following the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, the expression of ten frequently used reference genes, namely YWHAZ, HMBS, B2M, SDHA, GAPDH, HPRT, RPL13A, RPS5, RPS19 and GUSB, was evaluated in seven brain regions (frontal lobe, parietal lobe, occipital lobe, temporal lobe, thalamus, hippocampus and cerebellum) and whole brain of healthy dogs. The stability of expression varied between different brain areas. Using the GeNorm and NormFinder software, HMBS, GAPDH and HPRT were the most reliable reference genes for whole brain. Furthermore, based on GeNorm calculations it was concluded that as few as two to three reference genes are sufficient to obtain reliable normalisation, irrespective of the brain area. Our results amend and extend the limited previously published data on canine brain reference genes. Despite the excellent expression stability of HMBS, GAPDH and HPRT, the evaluation of expression stability of reference genes must be a standard and integral part of experimental design and subsequent data analysis.
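The GeNorm stability measure M referred to above is, for each candidate reference gene, the mean standard deviation of its pairwise log-expression ratios with every other candidate: a gene whose ratio to all others varies little is stable. A small sketch on simulated expression data (hypothetical values, not the canine data set):

```python
import numpy as np

def genorm_m(expr):
    """GeNorm stability M for each gene in a samples x genes expression
    matrix: mean SD of log2 ratios against every other gene (lower = more
    stable)."""
    logs = np.log2(expr)
    n = expr.shape[1]
    m = np.empty(n)
    for j in range(n):
        ratios = logs[:, [j]] - logs          # log ratios vs all genes
        sds = ratios.std(axis=0, ddof=1)
        m[j] = np.delete(sds, j).mean()       # exclude the gene itself
    return m

# Hypothetical expression matrix: genes 0 and 1 stable, gene 2 unstable.
rng = np.random.default_rng(2)
base = rng.lognormal(5, 0.8, size=(24, 1))    # shared sample-to-sample scale
noise = np.array([0.05, 0.1, 1.0])            # per-gene log2 noise SD
expr = base * 2 ** rng.normal(0, noise, size=(24, 3))

m = genorm_m(expr)
print(m)  # gene 2 should have the largest M
```

Note that the shared scale factor `base` cancels in every ratio, which is exactly why GeNorm is insensitive to overall RNA input differences between samples.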
Robot map building based on fuzzy-extending DSmT
NASA Astrophysics Data System (ADS)
Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun
2007-11-01
With the extensive application of mobile robots in many different fields, map building in unknown environments has become one of the principal issues in the field of intelligent mobile robots. However, information acquired in map building presents characteristics of uncertainty, imprecision and even high conflict, especially in the course of building grid maps using sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (such as the Algebraic Product operator, Bounded Product operator, Einstein Product operator and Default minimum operator), in order to develop a more general and flexible combination rule for more extensive application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching ( -NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct the map of an indoor environment with the new tool, in order to compare the map-building performance of the four T-norm operators when a Pioneer II mobile robot runs along the same trace. Finally, we conclude that this study develops a new idea for extending DSmT, provides a new approach for the autonomous navigation of mobile robots, and offers a human-computer interactive interface to manage and manipulate the robot remotely.
Muslim women's narratives about bodily change and care during critical illness: a qualitative study.
Zeilani, Ruqayya; Seymour, Jane E
2012-03-01
To explore experiences of Jordanian Muslim women in relation to bodily change during critical illness. A longitudinal narrative approach was used. A purposive sample of 16 Jordanian women who had spent a minimum of 48 hr in intensive care participated in one to three interviews over a 6-month period. Three main categories emerged from the analysis: the dependent body reflects changes in the women's bodily strength and performance, as they moved from being care providers into those in need of care; this was associated with experiences of a sense of paralysis, shame, and burden. The social body reflects the essential contribution that family help or nurses' support (as a proxy for family) made to women's adjustment to bodily change and their ability to make sense of their illness. The cultural body reflects the effect of cultural norms and Islamic beliefs on the women's interpretation of their experiences and relates to the women's understandings of bodily modesty. This study illustrates, by in-depth focus on Muslim women's narratives, the complex interrelationship between religious beliefs, cultural norms, and the experiences and meanings of bodily changes during critical illness. This article provides insights into vital aspects of Muslim women's needs and preferences for nursing care. It highlights the importance of including an assessment of culture and spiritual aspects when nursing critically ill patients. © 2011 Sigma Theta Tau International.
An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.
Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco
2017-04-01
In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
Inversion of Magnetic Measurements of the CHAMP Satellite Over the Pannonian Basin
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, P. T.; Wittmann, G.; Toronyi, B.; Puszta, S.
2011-01-01
The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed in a spherical shell, some 107,927 data recorded between January 1 and December 31 of 2008. They covered the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a spherical grid of 0.5 x 0.5 degrees at an elevation of 324 km using a Gaussian weight function. The vertical gradient of these total magnetic anomalies was also computed and mapped onto the surface of a sphere at 324 km elevation. The former spherical anomaly data at 425 km altitude were downward continued to 324 km. To interpret these data at the elevation of 324 km we used an inversion method, with a polygonal prism as the forward model. The minimum problem was solved numerically by the Simplex and Simulated Annealing methods; an L2 norm was used in the case of Gaussian distribution parameters and an L1 norm in the case of Laplace distribution parameters. We interpret the magnetic anomaly as being produced by several sources, with the effect of the stable magnetization arising from the exsolution of hemo-ilmenite minerals in the upper crustal metamorphic rocks.
Siupsinskiene, Nora; Lycke, Hugo
2011-07-01
This prospective cross-sectional study examines the effects of voice training on vocal capabilities in vocally healthy age and gender differentiated groups measured by voice range profile (VRP) and speech range profile (SRP). Frequency and intensity measurements of the VRP and SRP using standard singing and speaking voice protocols were derived from 161 trained choir singers (21 males, 59 females, and 81 prepubescent children) and from 188 nonsingers (38 males, 89 females, and 61 children). When compared with nonsingers, both genders of trained adult and child singers exhibited increased mean pitch range, highest frequency, and VRP area in high frequencies (P<0.05). Female singers and child singers also showed significantly increased mean maximum voice intensity, intensity range, and total VRP area. The logistic regression analysis showed that VRP pitch range, highest frequency, maximum voice intensity, and maximum-minimum intensity range, and SRP slope of speaking curve were the key predictors of voice training. Age, gender, and voice training differentiated norms of VRP and SRP parameters are presented. Significant positive effect of voice training on vocal capabilities, mostly singing voice, was confirmed. The presented norms for trained singers, with key parameters differentiated by gender and age, are suggested for clinical practice of otolaryngologists and speech-language pathologists. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
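A linear decoder with an L1 penalty, as described above, can be sketched with plain iterative soft-thresholding (ISTA), which drives most "synaptic weights" exactly to zero. The "population response" below is simulated white noise with five informative neurons, not the V1 model output.

```python
import numpy as np

def ista(X, y, lam, n_iter=500):
    """Lasso (L1-penalized least squares) via iterative soft-thresholding:
    gradient step on the squared error, then shrink each weight toward
    zero, zeroing the small ones."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)
        w = w - step * g
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# Toy "population response": 200 neurons, only the first 5 carry signal.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 200))
w_true = np.zeros(200)
w_true[:5] = [2, -2, 1.5, -1.5, 1]
y = X @ w_true + 0.1 * rng.normal(size=300)

w = ista(X, y, lam=150.0)
print(int((np.abs(w) > 1e-6).sum()), "nonzero weights out of", w.size)
```

The decoder recovers a small active set concentrated on the informative neurons, mirroring the abstract's observation that only a small fraction of the population is needed for decoding at high signal-to-noise.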
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y
2004-10-01
Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation increases with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
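The imputation idea can be illustrated with a deliberately simplified sketch: zeros below the MDL are replaced by draws from a fitted distribution truncated above at the MDL. The lognormal model, crude moment fit, and parameter values are assumptions; the paper itself uses a Gibbs sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lognormal doses; values below the MDL are recorded as zero.
true = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
mdl = 0.5
recorded = np.where(true < mdl, 0.0, true)

def impute_bmdl(recorded, mdl, n_imputations=20):
    """Replace zeros with draws from a fitted lognormal truncated above at the MDL."""
    obs = recorded[recorded > 0]
    # Crude moment fit on the observed values (biased; a real analysis would
    # maximize a censored likelihood or run a Gibbs sampler as in the paper).
    mu, sigma = np.log(obs).mean(), np.log(obs).std()
    zeros = recorded == 0
    imputed = np.tile(recorded, (n_imputations, 1))
    for m in range(n_imputations):
        draws = rng.lognormal(mu, sigma, size=zeros.sum() * 50)
        draws = draws[draws < mdl][: zeros.sum()]   # rejection-sample the tail
        imputed[m, zeros] = draws
    return imputed.mean(axis=0)                     # average over imputations

est = impute_bmdl(recorded, mdl)
# est.mean() sits closer to true.mean() than recorded.mean() does.
```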
First-Order System Least-Squares for the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H^1 norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of an FTC system under estimated-fault-parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
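A minimal sketch of a generalized-log transform and the variance stabilization it provides, assuming the two-component (additive plus multiplicative) error model associated with this literature; the noise parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def glog(x, c):
    """Generalized log: ~log(2x) for x >> c, finite and smooth at x = 0."""
    return np.log(x + np.sqrt(x ** 2 + c ** 2))

# Two-component error model: y = mu * exp(eta) + eps, so the standard
# deviation grows with the mean on the raw scale.
s_eta, s_eps = 0.1, 5.0
low = 10.0 * np.exp(s_eta * rng.normal(size=20000)) + s_eps * rng.normal(size=20000)
high = 1e4 * np.exp(s_eta * rng.normal(size=20000)) + s_eps * rng.normal(size=20000)

c = s_eps / s_eta    # transformation parameter (estimated from data in practice)
raw_ratio = high.std() / low.std()                      # large: variance depends on mean
glog_ratio = glog(high, c).std() / glog(low, c).std()   # close to 1 after transform
```

Note that glog stays defined for the negative intensities that background subtraction can produce, which is one reason it is preferred over a plain log.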
Dynamic characteristics of oxygen consumption.
Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven
2018-04-23
Previous studies have indicated that oxygen uptake (VO2) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the VO2 response is often roughly modelled as a first-order system due to inadequate stimulation and a low signal-to-noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the VO2 response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, VO2 was measured and recorded by a popular portable gas analyser (COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of VO2. For the proposed method, a properly selected kernel can represent the prior modelling information to reduce the dependence on comprehensive stimulations. Furthermore, due to the special elastic net formed by the l1 norm and a kernelised l2 norm, the estimations are smooth and concise. Additionally, the finite impulse response based nonparametric model estimated by the proposed method can optimally select the order and fits better, in terms of goodness of fit, than classical methods. Several kernels were introduced for the kernel-based VO2 modelling method. The results clearly indicated that the stable spline (SS) kernel has the best performance for VO2 modelling. In particular, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the results from the benchmark method [i.e., the prediction error method (PEM)] ([Formula: see text] vs [Formula: see text]).
The proposed nonparametric modelling method is an effective method for estimating the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the VO2 response with acceptable accuracy during treadmill exercise.
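Kernel-regularized FIR estimation of the kind described can be sketched as follows, using a first-order stable-spline (TC) kernel K[i,j] = alpha**max(i,j). The input signal, orders, and hyperparameters are invented for illustration, not the study's VO2 data:

```python
import numpy as np

rng = np.random.default_rng(0)

def ss_kernel(n, alpha=0.9):
    """First-order stable-spline (TC) kernel: K[i,j] = alpha**max(i,j).
    Encodes smooth, exponentially decaying impulse responses."""
    idx = np.arange(n)
    return alpha ** np.maximum.outer(idx, idx)

def fir_regressors(u, n):
    """Toeplitz regressor matrix: row t holds u[t], u[t-1], ..., u[t-n+1]."""
    N = len(u)
    Phi = np.zeros((N, n))
    for t in range(N):
        for k in range(min(n, t + 1)):
            Phi[t, k] = u[t - k]
    return Phi

n, N = 50, 300
g_true = 0.8 ** np.arange(n)              # decaying impulse response
u = rng.normal(size=N)                    # excitation signal
y = np.convolve(u, g_true)[:N] + 0.1 * rng.normal(size=N)

Phi, K, lam = fir_regressors(u, n), ss_kernel(n), 0.1
# Kernel-regularized estimate g = K Phi' (Phi K Phi' + lam I)^{-1} y,
# equivalent to ridge regression with penalty lam * g' K^{-1} g.
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + lam * np.eye(N), y)
```

The kernel plays the role of the prior mentioned in the abstract: it biases the estimate toward smooth, decaying responses without fixing the model order in advance.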
Attitude Error Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2002-01-01
The quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation. The quaternion must obey a unit norm constraint, though, which has led to the development of an extended Kalman filter using a quaternion for the global attitude estimate and a three-component representation for attitude errors. We consider various attitude error representations for this Multiplicative Extended Kalman Filter and its second-order extension.
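The multiplicative error convention can be sketched as follows: a three-component error estimate is folded into the global quaternion and the unit norm is restored by renormalization. The scalar-first Hamilton convention and small-angle form are assumptions for illustration:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product, scalar-first convention [w, x, y, z] (assumed here)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def apply_error(q_ref, a):
    """Fold a three-component attitude-error estimate a (radians) into the
    reference quaternion: delta_q ~ [1, a/2] to first order, then renormalize
    so the unit-norm constraint is restored."""
    dq = np.concatenate(([1.0], 0.5 * np.asarray(a)))
    q = quat_mult(dq, q_ref)
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])
q = apply_error(q, [0.01, -0.02, 0.005])    # small filter correction
```

Keeping the error state three-dimensional avoids the singular covariance that a four-component quaternion error would have under the unit-norm constraint.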
Sequential Analysis: Hypothesis Testing and Changepoint Detection
2014-07-11
it is necessary to estimate in situ the geographical coordinates and other parameters of earthquakes. The standard sensor equipment of a three... components. When an earthquake arises, the sensors begin to record several types of seismic waves (body and surface waves), among which the more important... machines and to increased safety norms. Many structures to be monitored, e.g., civil engineering structures subject to wind and earthquakes, aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
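The minimum-variance combination of correlated estimates underlying this approach can be sketched as follows; the covariance values are invented for illustration:

```python
import numpy as np

# Hypothetical correlated eigenvalue estimates and their covariance matrix
# (values are invented; think of the variances as being in units of 1e-6).
x = np.array([1.002, 0.998, 1.005])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.8],
                  [0.5, 0.8, 5.0]])

ones = np.ones(len(x))
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                       # optimal weights, summing to one
combined = w @ x                    # minimum-variance combined estimate
var_combined = 1.0 / (ones @ np.linalg.solve(Sigma, ones))
# var_combined never exceeds the variance of the best single estimate,
# consistent with the variance reductions reported in the abstract.
```

With sample covariances substituted for the population values, as the abstract notes, the combination remains close to minimum variance when enough histories are aggregated.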
Trends in annual minimum exposed snow and ice cover in High Mountain Asia from MODIS
NASA Astrophysics Data System (ADS)
Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Racoviteanu, Adina; Armstrong, Richard; Dozier, Jeff
2016-04-01
Though a relatively short record on climatological scales, data from the Moderate Resolution Imaging Spectroradiometer (MODIS) from 2000-2014 can be used to evaluate changes in the cryosphere and provide a robust baseline for future observations from space. We use the MODIS Snow Covered Area and Grain size (MODSCAG) algorithm, based on spectral mixture analysis, to estimate daily fractional snow and ice cover and the MODICE Persistent Ice (MODICE) algorithm to estimate the annual minimum snow and ice fraction (fSCA) for each year from 2000 to 2014 in High Mountain Asia. We have found that MODSCAG performs better than other algorithms, such as the Normalized Difference Index (NDSI), at detecting snow. We use MODICE because it minimizes false positives (compared to maximum extents), for example, when bright soils or clouds are incorrectly classified as snow, a common problem with optical satellite snow mapping. We analyze changes in area using the annual MODICE maps of minimum snow and ice cover for over 15,000 individual glaciers as defined by the Randolph Glacier Inventory (RGI) Version 5, focusing on the Amu Darya, Syr Darya, Upper Indus, Ganges, and Brahmaputra River basins. For each glacier with an area of at least 1 km2 as defined by RGI, we sum the total minimum snow and ice covered area for each year from 2000 to 2014 and estimate the trends in area loss or gain. We find the largest loss in annual minimum snow and ice extent for 2000-2014 in the Brahmaputra and Ganges with 57% and 40%, respectively, of analyzed glaciers with significant losses (p-value<0.05). In the Upper Indus River basin, we see both gains and losses in minimum snow and ice extent, but more glaciers with losses than gains. 
Our analysis shows that a smaller proportion of glaciers in the Amu Darya and Syr Darya are experiencing significant changes in minimum snow and ice extent (3.5% and 12.2%), possibly because more of the glaciers in this region are smaller than 1 km2 than in the Indus, Ganges, and Brahmaputra, making analysis from MODIS (pixel area ~0.25 km2) difficult. Overall, we see 23% of the glaciers in the 5 river basins with significant trends (in either direction). We relate these changes in area to topography and climate to understand the driving processes behind them. In addition to annual minimum snow and ice cover, the MODICE algorithm also provides the date of minimum fSCA for each pixel. To determine whether the surface was snow or ice, we use the date of minimum fSCA from MODICE to index daily maps of snow on ice (SOI) or exposed glacier ice (EGI) and systematically derive an equilibrium line altitude (ELA) for each year from 2000-2014. We test this new algorithm in the Upper Indus basin and produce annual estimates of ELA. For the Upper Indus basin we derive annual ELAs that range from 5350 m to 5450 m, which is slightly higher than published values of 5200 m for this region.
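The per-glacier trend test described above can be sketched as an ordinary least-squares slope with a t statistic; the area series below is synthetic, not MODICE output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual minimum snow/ice area (km^2) for one glacier, 2000-2014.
years = np.arange(2000, 2015, dtype=float)
area = 2.5 - 0.03 * (years - 2000) + 0.05 * rng.normal(size=years.size)

# OLS slope and its t statistic (n - 2 = 13 degrees of freedom).
t_c = years - years.mean()
slope = (t_c @ (area - area.mean())) / (t_c @ t_c)
resid = area - area.mean() - slope * t_c
se = np.sqrt((resid @ resid) / (years.size - 2) / (t_c @ t_c))
significant_loss = slope < 0 and abs(slope / se) > 2.16   # ~t critical, p = 0.05
```

Summing such tests over each basin's glaciers gives the percentages of significant losses quoted in the abstract.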
NASA Astrophysics Data System (ADS)
Croitoru, Madalina; Oren, Nir; Miles, Simon; Luck, Michael
Norms impose obligations, permissions and prohibitions on individual agents operating as part of an organisation. Typically, the purpose of such norms is to ensure that an organisation acts in some socially (or mutually) beneficial manner, possibly at the expense of individual agent utility. In this context, agents are norm-aware if they are able to reason about which norms are applicable to them, and to decide whether to comply with or ignore them. While much work has focused on the creation of norm-aware agents, much less has been concerned with aiding system designers in understanding the effects of norms on a system. The ability to understand such norm effects can aid the designer in avoiding incorrect norm specification, eliminating redundant norms and reducing normative conflict. In this paper, we address the problem of norm understanding by providing explanations as to why a norm is applicable, violated, or in some other state. We make use of conceptual graph based semantics to provide a graphical representation of the norms within a system. Given knowledge of the current and historical state of the system, such a representation allows for explanation of the state of norms, showing for example why they may have been activated or violated.
Reid, Allecia E.; Taber, Jennifer M.; Ferrer, Rebecca A.; Biesecker, Barbara B.; Lewis, Katie L.; Biesecker, Leslie G.; Klein, William M. P.
2018-01-01
Objective Genomic sequencing is becoming increasingly accessible, highlighting the need to understand the social and psychological factors that drive interest in receiving testing results. These decisions may depend on perceived descriptive norms (how most others behave) and injunctive norms (what is approved of by others). We predicted that descriptive norms would be directly associated with intentions to learn genomic sequencing results, whereas injunctive norms would be associated indirectly, via attitudes. These differential associations with intentions versus attitudes were hypothesized to be strongest when individuals held ambivalent attitudes toward obtaining results. Methods Participants enrolled in a genomic sequencing trial (n=372) reported intentions to learn medically actionable, non-medically actionable, and carrier sequencing results. Descriptive norms items referenced other study participants. Injunctive norms were analyzed separately for close friends and family members. Attitudes, attitudinal ambivalence, and sociodemographic covariates were also assessed. Results In structural equation models, both descriptive norms and friend injunctive norms were associated with intentions to receive all sequencing results (ps<.004). Attitudes consistently mediated all friend injunctive norms-intentions associations, but not the descriptive norms-intentions associations. Attitudinal ambivalence moderated the association between friend injunctive norms (p≤.001), but not descriptive norms (p=.16), and attitudes. Injunctive norms were significantly associated with attitudes when ambivalence was high, but were unrelated when ambivalence was low. Results replicated for family injunctive norms. Conclusions Descriptive and injunctive norms play roles in genomic sequencing decisions. Considering mediators and moderators of these processes enhances ability to optimize use of normative information to support informed decision making. PMID:29745680
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.
Reassessing Wind Potential Estimates for India: Economic and Policy Implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet
2011-09-15
We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (no farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.
Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1982-01-01
Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for maximum to maximum, minimum to minimum, minimum to maximum, and maximum to minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection for activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months subsequent to the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given for flares, gradual rise and fall radio events and 2800 MHz radio emission. Certain solar activity parameters, especially as they relate to the near term Spacelab 2 time frame are estimated.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. 
Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
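Under a simple first-order decay assumption (a simplification of the two-stage model described above), the minimum depuration time to reach a target pathogen level can be computed directly; the rate and levels below are illustrative, not fitted values from the paper:

```python
import math

def min_depuration_time(c0, c_target, k):
    """Hours of depuration needed for first-order decay c(t) = c0 * exp(-k*t)
    to fall from c0 to c_target (both in, e.g., genome copies per gram)."""
    return math.log(c0 / c_target) / k

# E.g. a decay rate of 0.02/h, starting at 1000 and targeting 200 copies/g:
t = min_depuration_time(1000, 200, 0.02)   # ~80.5 hours
```

A slowly depurating pathogen (small k) pushes this time well past the 42-hour minimum for class B sites, which is the pattern the abstract reports for norovirus and FRNA+ bacteriophage.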
Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.
2017-01-01
In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. Firstly, subjects must perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a “U” shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and also the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on the aforementioned issues, and insights for future investigations are suggested. In summary, further analysis is necessary to determine whether these factors are a concern, since the LMT is relevant in several contexts of health sciences. PMID:28642717
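The LMI is the minimum of the U-shaped lactate curve, which is commonly estimated by fitting a parabola to the incremental-stage data and taking its vertex; the speeds and lactate values below are invented for illustration:

```python
import numpy as np

# Invented incremental-test data: stage speeds (km/h) and blood lactate (mmol/L).
speed = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
lactate = np.array([6.1, 4.8, 4.1, 3.9, 4.4, 5.6])   # "U"-shaped kinetic

a, b, c = np.polyfit(speed, lactate, 2)   # lactate ~ a*v^2 + b*v + c
lmi = -b / (2.0 * a)                      # vertex of the parabola = LMI estimate
```

The methodological controversies mentioned above include exactly such choices: the fitting function, the number of stages, and the stage increments all shift the estimated vertex.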
Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan
2017-03-01
The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE) in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5 ± 11.9 years; time since diagnosis = 6.5 ± 6.2 years; Patient Determined Disease Steps ≤ 3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations) and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory, and Bland-Altman analysis. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses, all combinations of days, bar single-day combinations, resulted in a mean bias within ±10% when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE in people with MS who walk unaided.
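The day-combination analysis can be sketched as follows: enumerate every k-day subset of the week and compare its mean against the 7-day mean. The step-count data below are synthetic, not the study's monitor data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Synthetic 7-day step counts: 26 participants, person effect + daily noise.
steps = rng.normal(8000, 2500, size=(26, 1)) + rng.normal(0, 1200, size=(26, 7))
week_mean = steps.mean(axis=1)

def worst_bias_pct(k):
    """Largest |group mean bias|, as % of the 7-day mean, over all k-day subsets."""
    worst = 0.0
    for days in combinations(range(7), k):
        bias = (steps[:, list(days)].mean(axis=1) - week_mean).mean()
        worst = max(worst, abs(bias) / week_mean.mean() * 100)
    return worst

# Bias shrinks as more days are averaged: single days are the least reliable,
# mirroring the study's exclusion of single-day combinations.
```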
Yang, Bo
2018-06-01
Based on the theory of normative social behavior (Rimal & Real, 2005), this study examined the effects of descriptive norms, close versus distal peer injunctive norms, and interdependent self-construal on college students' intentions to consume alcohol. Results of a cross-sectional study conducted among U.S. college students (N = 581) found that descriptive norms, close, and distal peer injunctive norms had independent effects on college students' intentions to consume alcohol. Furthermore, close peer injunctive norms moderated the effects of descriptive norms on college students' intentions to consume alcohol and the interaction showed different patterns among students with a strong and weak interdependent self-construal. High levels of close peer injunctive norms weakened the relationship between descriptive norms and intentions to consume alcohol among students with a strong interdependent self-construal but strengthened the relationship between descriptive norms and intentions to consume alcohol among students with a weak interdependent self-construal. Implications of the findings for norms-based research and college drinking interventions are discussed.
Reconstructing the duty of water: a study of emergent norms in socio-hydrology
NASA Astrophysics Data System (ADS)
Wescoat, J. L., Jr.
2013-12-01
This paper assesses the changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, or more precisely the history of socio-hydrologic ideas, a line of research that is useful for interpreting and anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late 18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. The duty of water concept in this context articulated a form of political power, as did related irrigation engineering concepts such as "command" and "regime". Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for private water rights appropriation (e.g., only 40 to 80 acres per cfs). While both concepts of duty addressed socio-economic values associated with irrigation, the western US linked duty with justifications for, and limits of, water ownership. The final sections show that while the duty of water concept has been eclipsed in practice by other measures, standards, and values of water use efficiency, it has continuing relevance for examining ethical duties and for anticipating, if not predicting, emerging social values with respect to water.
Foster, Dawn W.; Neighbors, Clayton; Krieger, Heather
2015-01-01
Objectives This study assessed descriptive and injunctive norms, evaluations of alcohol consequences, and acceptability of drinking. Methods Participants were 248 heavy-drinking undergraduates (81.05% female; Mage = 23.45). Results Stronger perceptions of descriptive and injunctive norms for drinking and more positive evaluations of alcohol consequences were positively associated with drinking and the number of drinks considered acceptable. Descriptive and injunctive norms interacted, indicating that injunctive norms were linked with number of acceptable drinks among those with higher descriptive norms. Descriptive norms and evaluations of consequences interacted, indicating that descriptive norms were positively linked with number of acceptable drinks among those with negative evaluations of consequences; however, among those with positive evaluations of consequences, descriptive norms were negatively associated with number of acceptable drinks. Injunctive norms and evaluations of consequences interacted, indicating that injunctive norms were positively associated with number of acceptable drinks, particularly among those with positive evaluations of consequences. A three-way interaction emerged between injunctive and descriptive norms and evaluations of consequences, suggesting that injunctive norms and the number of acceptable drinks were positively associated more strongly among those with negative versus positive evaluations of consequences. Those with higher acceptable drinks also had positive evaluations of consequences and were high in injunctive norms. Conclusions Findings supported hypotheses that norms and evaluations of alcohol consequences would interact with respect to drinking and acceptance of drinking. These examinations have practical utility and may inform development and implementation of interventions and programs targeting alcohol misuse among heavy drinking undergraduates. PMID:25437265
An advanced algorithm for deformation estimation in non-urban areas
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-09-01
This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused for instance by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped and this can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates an object adaptive spatial phase filtering and residual topography removal for an accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein, the phase inversion is performed using an L1 norm minimization which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.
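The abstract's contrast between L2 (SVD) and L1 phase inversion can be illustrated with a generic least-absolute-deviations solver via iteratively reweighted least squares. This is a toy sketch, not the paper's implementation; the function name, the IRLS scheme, and the residual floor `eps` are my own choices:

```python
import numpy as np

def lad_irls(A, b, n_iter=50, eps=1e-8):
    """Approximately minimise ||Ax - b||_1 by iteratively reweighted
    least squares, starting from the L2 (SVD-based) solution."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # L2 starting point
    for _ in range(n_iter):
        r = A @ x - b
        # Reweighting for the L1 norm: small residuals get large weight,
        # so gross outliers barely influence the next solve.
        w = 1.0 / np.maximum(np.abs(r), eps)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```

On a toy overdetermined system with one gross outlier (mimicking a phase-unwrapping error), the L1 solution stays near the consensus value while the L2 solution is dragged toward the outlier, which is the robustness property the abstract invokes.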
Method for matching customer and manufacturer positions for metal product parameters standardization
NASA Astrophysics Data System (ADS)
Polyakova, Marina; Rubin, Gennadij; Danilova, Yulija
2018-04-01
Decision making is the main stage of regulating the relations between customer and manufacturer when the requirements of norms in standards are designed. The positions of the negotiating sides must be matched in order to reach consensus. To take into consideration the differences between the customer's and the manufacturer's estimation of the object undergoing standardization, special methods of analysis are needed. It is proposed to establish relationships between product properties and its functions using functional-target analysis. The special feature of this type of functional analysis is that it considers both the functions and the properties of the research object. The possibility of establishing links between functions and properties is shown using the example of a hexagonal head screw. Such an approach allows a quantitative assessment of the closeness of the positions of customer and manufacturer during decision making when standard norms are established.
NASA Astrophysics Data System (ADS)
Advocate, Dev L.
The matter of the viscosity of the mantle has started to become serious. In 1935, Norm Haskell estimated the viscosity to be about 10^20 poise and there the matter stood for about half a century. For a little while, people worried about excess ellipticity of the Earth and attributed this to a “fossil bulge” that lagged the rotation rate. For this same little while, 10^25 poise was thought to be the viscosity of the lower mantle, but then it was discovered that the equator was also out of shape by about the same amount, ruling out the “fossil bulge” idea. To cover their embarrassment, geodynamicists upped the viscosity of the mantle to 10^21 by adopting S.I. (Satan's Invention) units. No one noticed for some time since it didn't really matter whether viscosity was given in stokes, poise, or pascal seconds. It was just a large number with a large uncertainty and no one had a feel for it anyway.
Mena, Carlos; Fuentes, Eduardo; Ormazábal, Yony; Palomo, Iván
2017-05-11
The global percentage of people over 60 is strongly increasing and estimated to exceed 20% by 2050, which means that there will be an increase in many pathological conditions related to aging. Mapping of the location of aging people and identification of their needs can be extremely valuable from a social-economic point of view. Participants in this study were 148 randomly selected adults from Talca City, Chile aged 60-74 at baseline. Geographic information systems (GIS) analyses were performed using ArcGIS software through its module Spatial Autocorrelation. In this study, we demonstrated that elderly people show geographic clustering according to above-norm results of anthropometric measurements and blood chemistry. The spatial identifications found would facilitate exploring the impact of treatment programmes in communities where many aging people live, thereby improving their quality of life as well as reducing overall costs.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Dunca, Argus A.
2017-12-01
This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and the zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error ū − w̄^α is of higher order, O(α^1.5), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.
Algamal, Z Y; Lee, M H
2017-01-01
A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor selection design for QSAR classification model estimation is proposed by adding a new weight inside the L1 norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of the stability test and the applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
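The idea of a weight inside the L1 norm can be sketched with a weighted lasso solved by proximal gradient descent (ISTA) on a linear model. The paper's setting is a QSAR classifier and its weights are constructed differently, so everything below (names, the linear loss, the weight values) is an illustrative assumption:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the (weighted) L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, weights, lam=0.1, n_iter=500):
    """ISTA for  min_b  0.5*||y - Xb||^2 + lam * sum_j w_j * |b_j|.

    A descriptor-specific weight w_j inside the L1 norm lets the penalty
    shrink descriptors believed to be irrelevant more aggressively.
    """
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L, lam * weights / L)
    return b
```

With large weights on the irrelevant descriptors, their coefficients are driven exactly to zero while the relevant ones are barely biased, which is the selection behaviour a weighted L1 penalty is meant to deliver.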
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan
2016-05-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
Flint, L.E.; Flint, A.L.
2008-01-01
Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
Doornwaard, Suzan M.; ter Bogt, Tom F. M.; Reitz, Ellen; van den Eijnden, Regina J. J. M.
2015-01-01
Research on the role of sex-related Internet use in adolescents’ sexual development has often isolated the Internet and online behaviors from other, offline influencing factors in adolescents’ lives, such as processes in the peer domain. The aim of this study was to test an integrative model explaining how receptive (i.e., use of sexually explicit Internet material [SEIM]) and interactive (i.e., use of social networking sites [SNS]) sex-related online behaviors interrelate with perceived peer norms in predicting adolescents’ experience with sexual behavior. Structural equation modeling on longitudinal data from 1,132 Dutch adolescents (Mage T1 = 13.95; range 11-17; 52.7% boys) demonstrated concurrent, direct, and indirect effects between sex-related online behaviors, perceived peer norms, and experience with sexual behavior. SEIM use (among boys) and SNS use (among boys and girls) predicted increases in adolescents’ perceptions of peer approval of sexual behavior and/or in their estimates of the numbers of sexually active peers. These perceptions, in turn, predicted increases in adolescents’ level of experience with sexual behavior at the end of the study. Boys’ SNS use also directly predicted increased levels of experience with sexual behavior. These findings highlight the need for multisystemic research and intervention development to promote adolescents’ sexual health. PMID:26086606
Doornwaard, Suzan M; ter Bogt, Tom F M; Reitz, Ellen; van den Eijnden, Regina J J M
2015-01-01
Research on the role of sex-related Internet use in adolescents' sexual development has often isolated the Internet and online behaviors from other, offline influencing factors in adolescents' lives, such as processes in the peer domain. The aim of this study was to test an integrative model explaining how receptive (i.e., use of sexually explicit Internet material [SEIM]) and interactive (i.e., use of social networking sites [SNS]) sex-related online behaviors interrelate with perceived peer norms in predicting adolescents' experience with sexual behavior. Structural equation modeling on longitudinal data from 1,132 Dutch adolescents (M(age) T1 = 13.95; range 11-17; 52.7% boys) demonstrated concurrent, direct, and indirect effects between sex-related online behaviors, perceived peer norms, and experience with sexual behavior. SEIM use (among boys) and SNS use (among boys and girls) predicted increases in adolescents' perceptions of peer approval of sexual behavior and/or in their estimates of the numbers of sexually active peers. These perceptions, in turn, predicted increases in adolescents' level of experience with sexual behavior at the end of the study. Boys' SNS use also directly predicted increased levels of experience with sexual behavior. These findings highlight the need for multisystemic research and intervention development to promote adolescents' sexual health.
Recovery of NORM from scales generated by oil extraction.
Al Attar, Lina; Safia, Bassam; Ghani, Basem Abdul; Al Abdulah, Jamal
2016-03-01
Scales containing naturally occurring radioactive materials (NORM) are a major problem in oil production that leads to costly remediation and disposal programmes. In view of environmental protection, radiological and chemical characterisation is an essential step prior to waste treatment. This study focuses on developing a protocol to recover (226)Ra and (210)Pb from scales produced by the petroleum industry. X-ray diffractograms of the scales indicated the presence of barite-strontium (Ba0.75Sr0.25SO4) and hokutolite (Ba0.69Pb0.31SO4) as main minerals. Quartz, galena and Ca2Al2SiO6(OH)2 or sphalerite and iron oxide were found in minor quantities. Incineration to 600 °C followed by enclosed digestion and acid treatment gave complete digestion. Using (133)Ba and (210)Pb tracers as internal standards gave recoveries of 87-91% for (226)Ra and ca. 100% for (210)Pb. Radium was finally dissolved in concentrated sulphuric acid, while (210)Pb dissolved in the former solution as well as in 8 M nitric acid. Dissolving the scales would provide a better estimation of their radionuclide contents, facilitate the determination of their chemical composition, and make it possible to recycle NORM wastes in terms of radionuclide production. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mathuthu, Manny; Kamunda, Caspah; Madhuku, Morgan
2016-06-07
Mining is one of the major causes of elevation of naturally-occurring radionuclide material (NORM) concentrations on the Earth's surface. The aim of this study was to evaluate the human risk associated with exposure to NORMs in soils from mine tailings around a gold mine. A broad-energy germanium detector was used to measure activity concentrations of these NORMs in 66 soil samples (56 from five mine tailings and 10 from the control area). The RESidual RADioactivity (RESRAD) OFFSITE modeling program (version 3.1) was then used to estimate the radiation doses and the cancer morbidity risk of uranium-238 ((238)U), thorium-232 ((232)Th), and potassium-40 ((40)K) for a hypothetical resident scenario. According to the RESRAD prediction, the maximum total effective dose equivalent (TEDE) during 100 years was found to be 0.0315 mSv/year at year 30, while the maximum total excess cancer morbidity risk for all the pathways was 3.04 × 10(-5) at year 15. The US Environmental Protection Agency considers a cancer risk in the range of 10(-6) to 10(-4) acceptable for regulatory purposes. Therefore, results obtained from the RESRAD OFFSITE code have shown that the health risk from gold mine tailings is within acceptable levels according to international standards.
The Role of Cities in Reducing Smoking in China
Redmon, Pamela; Koplan, Jeffrey; Eriksen, Michael; Li, Shuyang; Kean, Wang
2014-01-01
China is the epicenter of the global tobacco epidemic. China grows more tobacco, produces more cigarettes, makes more profits from tobacco and has more smokers than any other nation in the world. Approximately one million smokers in China die annually from diseases caused by smoking, and this estimate is expected to reach over two million by 2020. Chinese cities have a unique opportunity and role to play in leading the tobacco control charge from the “bottom up”. The Emory Global Health Institute—China Tobacco Control Partnership supported 17 cities to establish tobacco control programs aimed at changing social norms for tobacco use. Program assessments showed the Tobacco Free Cities grantees’ progress in establishing tobacco control policies and raising public awareness through policies, programs and education activities varied from modest to substantial. Lessons learned included the need for training and tailored technical support to build staff capacity and the importance of government and organizational support for tobacco control. Tobacco control, particularly in China, is complex, but the potential for significant public health impact is unparalleled. Cities have a critical role to play in changing social norms of tobacco use, and may be the driving force for social norm change related to tobacco use in China. PMID:25264682
Mathuthu, Manny; Kamunda, Caspah; Madhuku, Morgan
2016-01-01
Mining is one of the major causes of elevation of naturally-occurring radionuclide material (NORM) concentrations on the Earth’s surface. The aim of this study was to evaluate the human risk associated with exposure to NORMs in soils from mine tailings around a gold mine. A broad-energy germanium detector was used to measure activity concentrations of these NORMs in 66 soil samples (56 from five mine tailings and 10 from the control area). The RESidual RADioactivity (RESRAD) OFFSITE modeling program (version 3.1) was then used to estimate the radiation doses and the cancer morbidity risk of uranium-238 (238U), thorium-232 (232Th), and potassium-40 (40K) for a hypothetical resident scenario. According to the RESRAD prediction, the maximum total effective dose equivalent (TEDE) during 100 years was found to be 0.0315 mSv/year at year 30, while the maximum total excess cancer morbidity risk for all the pathways was 3.04 × 10−5 at year 15. The US Environmental Protection Agency considers a cancer risk in the range of 10−6 to 10−4 acceptable for regulatory purposes. Therefore, results obtained from the RESRAD OFFSITE code have shown that the health risk from gold mine tailings is within acceptable levels according to international standards. PMID:27338424
ERIC Educational Resources Information Center
Rule, David L.
Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2014 CFR
2014-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provides valuable fire behavior functions, while enabling multi-core utilization for the...
Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds
NASA Astrophysics Data System (ADS)
Wesemann, Stefan; Marzetta, Thomas L.
2017-12-01
For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
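The rank-1 SVD via power iteration mentioned in the abstract can be sketched as follows. This is the generic textbook power iteration on A^T A, not the authors' code; the function name, tolerance, and seeding are my own choices:

```python
import numpy as np

def rank1_svd_power_iteration(A, n_iter=100, tol=1e-10, seed=0):
    """Approximate the dominant singular triple (u, s, v) of A.

    Each step costs one multiply by A and one by A.T, i.e. O(m*n) --
    quadratic time for a square matrix, as the abstract notes.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A.T @ (A @ v)          # one power-iteration step on A^T A
        w_norm = np.linalg.norm(w)
        if w_norm == 0:            # A is the zero matrix
            break
        w /= w_norm
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    s = np.linalg.norm(A @ v)      # dominant singular value
    u = (A @ v) / s                # corresponding left singular vector
    return u, s, v
```

Convergence is geometric in the ratio of the second to the first singular value, so a handful of iterations suffices when the dominant direction is well separated, which is the regime a rank-1 channel-subspace estimate assumes.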
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
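Under the missing-completely-at-random model described above, a natural starting point is the generalized sample covariance computed from pairwise-complete observations, to which banding or thresholding can then be applied. A minimal numpy sketch, with the assumptions that NaN marks missingness and that the function name and looping style are mine, not the paper's:

```python
import numpy as np

def pairwise_complete_cov(X):
    """Entrywise covariance estimate from incomplete data (NaN = missing).

    Each entry (j, k) is estimated from the samples where both
    coordinates are observed; under MCAR this gives an unbiased-in-spirit
    plug-in that regularized estimators can band or threshold.
    """
    n, p = X.shape
    obs = ~np.isnan(X)
    S = np.empty((p, p))
    for j in range(p):
        for k in range(p):
            both = obs[:, j] & obs[:, k]   # rows where both coords observed
            xj = X[both, j]
            xk = X[both, k]
            S[j, k] = np.mean((xj - xj.mean()) * (xk - xk.mean()))
    return S
```

Note that, unlike the complete-data sample covariance, this matrix need not be positive semidefinite, which is one reason the theoretical analysis of such estimators requires care.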
Extending the Mertonian Norms: Scientists' Subscription to Norms of Research
ERIC Educational Resources Information Center
Anderson, Melissa S.; Ronning, Emily A.; De Vries, Raymond; Martinson, Brian C.
2010-01-01
This analysis, based on focus groups and a national survey, assesses scientists' subscription to the Mertonian norms of science and associated counternorms. It also supports extension of these norms to governance (as opposed to administration), as a norm of decision-making, and quality (as opposed to quantity), as an evaluative norm. (Contains 1…
Verbeke, Peter; Vermeulen, Gert; Meysman, Michaël; Vander Beken, Tom
2015-01-01
Using the new legal basis provided by the Lisbon Treaty, the Council of the European Union has endorsed the 2009 Procedural Roadmap for strengthening the procedural rights of suspected or accused persons in criminal proceedings. This Roadmap has so far resulted in six measures from which specific procedural minimum standards have been and will be adopted or negotiated. So far, only Measure E directly touches on the specific issue of vulnerable persons. This Measure has recently produced a tentative result through a Commission Recommendation on procedural safeguards for vulnerable persons in criminal proceedings. This contribution aims to discuss the need for the introduction of binding minimum standards throughout Europe to provide additional protection for mentally disordered defendants. The paper will examine whether or not the member states adhere to existing fundamental norms and standards in this context, and whether the application of these norms and standards should be made more uniform. For this purpose, the procedural situation of mentally disordered defendants in Belgium and England and Wales will be thoroughly explored. The research establishes that Belgian law is unsatisfactory in the light of the Strasbourg case law, and that the situation in practice in England and Wales indicates not only that there is justifiable doubt about whether fundamental principles are always adhered to, but also that these principles should become more anchored in everyday practice. It will therefore be argued that there is a need for putting Measure E into practice. The Commission Recommendation, though only suggestive, may serve as a necessary and inspirational vehicle to improve the procedural rights of mentally disordered defendants and to ensure that member states are able to cooperate within the mutual recognition framework without being challenged on the grounds that they are collaborating with peers who do not respect defendants' fundamental fair trial rights. 
Throughout this contribution the term 'defendant' will be used, and no difference will be made in terminology between suspected and accused persons. This contribution only covers the situation of mentally disordered adult defendants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Children are sensitive to norms of giving.
McAuliffe, Katherine; Raihani, Nichola J; Dunham, Yarrow
2017-10-01
People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were told how much other children give (descriptive norm) or what they should give according to an adult (injunctive norm). Results showed that children generally gave more when they were exposed to a generous norm. However, patterns of compliance varied with age. Younger children were more likely to comply with the selfish norm, suggesting a licensing effect. By contrast, older children were more influenced by the generous norm, yet capped their donations at 50%, perhaps adhering to a pre-existing norm of equality. Children were not differentially influenced by descriptive or injunctive norms, suggesting a primacy of norm content over norm format. Together, our findings indicate that while generosity is malleable in children, normative information does not completely override pre-existing biases. Copyright © 2017 Elsevier B.V. All rights reserved.
Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed
NASA Astrophysics Data System (ADS)
Carrasco, V. M. S.; Vaquero, J. M.
2016-11-01
We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.
Evseeva, T; Belykh, E; Geras'kin, S; Majstrenko, T
2012-07-01
In spite of the long history of research, radioactive contamination of the Semipalatinsk nuclear test site (SNTS) in the Republic of Kazakhstan has not been adequately characterized. Our cartographic investigation has demonstrated highly variable radioactive contamination of the SNTS. The Cs-137, Sr-90, Eu-152, Eu-154, Co-60, and Am-241 activity concentrations in soil samples from the "Balapan" site were 42.6-17646, 96-18250, 1.05-11222, 0.6-4865, 0.23-4893, and 1.2-1037 Bq kg(-1), respectively. Cs-137 and Sr-90 activity concentrations in soil samples from the "Experimental field" site varied from 87 to 400 and from 94 to 1000 Bq kg(-1), respectively. Activity concentrations of Co-60, Eu-152, and Eu-154 were below the minimum detectable activity of the method used. Concentrations of naturally occurring radionuclides (K-40, Ra-226, U-238, and Th-232) in the majority of soil samples from the "Balapan" and the "Experimental field" sites did not exceed levels typical of the areas surrounding the SNTS. Estimation of risks associated with radioactive contamination, based on the IAEA clearance levels for a number of key radionuclides in solid materials, shows that soils sampled from the "Balapan" and the "Experimental field" sites might be considered radioactive waste. The decrease in specific soil activity at the studied sites to safe levels, through Co-60, Cs-137, Sr-90, Eu-152, and Eu-154 radioactive decay and Am-241 accumulation-decay, will occur no earlier than 100 years from now. In contrast, soils from the "Experimental field" and the "Balapan" sites (except at 0.5-2.5 km distance from the "Chagan" explosion point) cannot be regarded as radioactive waste according to the safety norms in force in Russia and Kazakhstan. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Lee, Hyegyu; Paek, Hye-Jin
2013-01-01
Objective: To examine how norm appeals and guilt influence smokers' behavioural intention. Design: Quasi-experimental design. Setting: South Korea. Method: Two hundred and fifty-five male smokers were randomly assigned to descriptive, injunctive, or subjective anti-smoking norm messages. After they viewed the norm messages, their norm perceptions,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirro, G.A.
1997-02-01
This paper presents an overview of issues related to handling NORM materials, and provides a description of a facility designed for the processing of NORM contaminated equipment. With regard to handling NORM materials the author discusses sources of NORM, problems, regulations and disposal options, potential hazards, safety equipment, and issues related to personnel protection. For the facility, the author discusses: description of the permanent facility; the operations of the facility; the license it has for handling specific radioactive material; operating and safety procedures; decontamination facilities on site; NORM waste processing capabilities; and offsite NORM services which are available.
A case study on the formation and sharing process of science classroom norms
NASA Astrophysics Data System (ADS)
Chang, Jina; Song, Jinwoong
2016-03-01
The teaching and learning of science in school are influenced by various factors, including both individual factors, such as member beliefs, and social factors, such as the power structure of the class. To understand this complex context affected by various factors in schools, we investigated the formation and sharing process of science classroom norms in connection with these factors. By examining the developmental process of science classroom norms, we identified how the norms were realized, shared, and internalized among the members. We collected data through classroom observations and interviews focusing on two elementary science classrooms in Korea. From these data, factors influencing norm formation were extracted and developed as stories about norm establishment. The results indicate that every science classroom norm was established, shared, and internalized differently according to the values ingrained in the norms, the agent of norm formation, and the members' understanding about the norm itself. The desirable norms originating from values in science education, such as having an inquiring mind, were not established spontaneously by students, but were instead established through well-organized norm networks to encourage concrete practice. Educational implications were discussed in terms of the practice of school science inquiry, cultural studies, and value-oriented education.
Assessment of pre-injury health-related quality of life: a systematic review.
Scholten, Annemieke C; Haagsma, Juanita A; Steyerberg, Ewout W; van Beeck, Ed F; Polinder, Suzanne
2017-03-14
Insight into the change from pre- to post-injury health-related quality of life (HRQL) of trauma patients is important to derive estimates of the impact of injury on HRQL. Prospectively collected pre-injury HRQL data are, however, often not available due to the difficulty of collecting these data before the injury. We performed a systematic review on the current methods used to assess pre-injury health status and to estimate the change from pre- to post-injury HRQL due to an injury. A systematic literature search was conducted in EMBASE, MEDLINE, and other databases. We identified studies that reported on the pre-injury HRQL of trauma patients. Articles were collated by type of injury and HRQL instrument used. Reported pre-injury HRQL scores were compared with general age- and gender-adjusted norms for the EQ-5D, SF-36, and SF-12. We retrieved results from 31 eligible studies, described in 41 publications. All but two studies used retrospective assessment and asked patients to recall their pre-injury HRQL, with widely varying timing of assessment (from soon after injury up to years after injury). These studies commonly applied the SF-36 (n = 13), EQ-5D (n = 9), or SF-12 (n = 3) using questionnaires (n = 14) or face-to-face interviews (n = 11). Two studies reported prospective pre-injury assessment, based on prospective longitudinal cohort studies from a sample of initially non-injured patients, and applied questionnaires using the SF-36 or SF-12. The recalled pre-injury HRQL scores of injury patients consistently exceeded age- and sex-adjusted population norms, except in a limited number of studies on injury types of higher severity (e.g., traumatic brain injury and hip fractures). All studies reported reduced post-injury HRQL compared to pre-injury HRQL. 
Both prospective studies reported that patients had recovered to their pre-injury levels of physical and mental health, while in all but one retrospective study patients did not regain the reported pre-injury levels of HRQL, even years after injury. So far, primarily retrospective research has been conducted to assess pre-injury HRQL. This research shows consistently higher pre-injury HRQL scores than population norms and a recovery that lags behind that of prospective assessments, implying a systematic overestimation of the change in HRQL from pre- to post-injury due to an injury. More prospective research is necessary to examine the effect of recall bias and response shift. Researchers should be aware of the bias that may arise when pre-injury HRQL is assessed retrospectively or when population norms are applied, and should use prospectively derived HRQL scores wherever possible to estimate the impact of injury on HRQL.
Jacobson, Ryan P; Mortensen, Chad R; Cialdini, Robert B
2011-03-01
The authors suggest that injunctive and descriptive social norms engage different psychological response tendencies when made selectively salient. On the basis of suggestions derived from the focus theory of normative conduct and from consideration of the norms' functions in social life, the authors hypothesized that the 2 norms would be cognitively associated with different goals, would lead individuals to focus on different aspects of self, and would stimulate different levels of conflict over conformity decisions. Additionally, a unique role for effortful self-regulation was hypothesized for each type of norm-used as a means to resist conformity to descriptive norms but as a means to facilitate conformity for injunctive norms. Four experiments supported these hypotheses. Experiment 1 demonstrated differences in the norms' associations to the goals of making accurate/efficient decisions and gaining/maintaining social approval. Experiment 2 provided evidence that injunctive norms lead to a more interpersonally oriented form of self-awareness and to a greater feeling of conflict about conformity decisions than descriptive norms. In the final 2 experiments, conducted in the lab (Experiment 3) and in a naturalistic environment (Experiment 4), self-regulatory depletion decreased conformity to an injunctive norm (Experiments 3 and 4) and increased conformity to a descriptive norm (Experiment 4)-even though the norms advocated identical behaviors. By illustrating differentiated response tendencies for each type of social norm, this research provides new and converging support for the focus theory of normative conduct. (c) 2011 APA, all rights reserved
ERIC Educational Resources Information Center
Gorgorio, Nuria; Planas, Nuria
2005-01-01
Starting from the constructs "cultural scripts" and "social representations", and on the basis of the empirical research we have been developing until now, we revisit the construct norms from a sociocultural perspective. Norms, both sociomathematical norms and norms of the mathematical practice, as cultural scripts influenced…
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Dai, Ming; Yin, Chuan-li
2017-09-01
Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The L0-norm sparse priors of the gradient and the dark channel are then used to estimate the APSF blur kernel, and the original clear image is recovered by Wiener filtering via the fast Fourier transform. Compared with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove atmospheric degradation, preserve image detail, and improve the quality evaluation indexes.
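The final recovery step in this abstract, Wiener filtering in the frequency domain, can be sketched as follows. This is a minimal illustration that treats the blur kernel as already estimated (in the paper it would come from the APSF estimate); the function name and the noise-to-signal constant `k` are hypothetical, not from the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Recover a latent image from a blurred one, given an estimated
    point spread function (PSF), via frequency-domain Wiener filtering.
    k approximates the noise-to-signal power ratio."""
    H = np.fft.fft2(kernel, s=blurred.shape)   # PSF transfer function
    G = np.fft.fft2(blurred)                   # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + k)      # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

Larger `k` suppresses noise amplification at frequencies where the kernel response is weak, at the cost of a less sharp restoration.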
ERIC Educational Resources Information Center
McGuire, Luke; Rutland, Adam; Nesdale, Drew
2015-01-01
The present study examined the interactive effects of school norms, peer norms, and accountability on children's intergroup attitudes. Participants (n = 229) aged 5-11 years, in a between-subjects design, were randomly assigned to a peer group with an inclusion or exclusion norm, learned their school either had an inclusion norm or not, and were…
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) ... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant ... the difference is compared to the standard maximum-likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. 
Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014
Direct discontinuous Galerkin method and its variations for second order elliptic equations
Huang, Hongying; Chen, Zheng; Li, Jin; ...
2016-08-23
We study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. An a priori error estimate in the energy norm is established for all four methods. An optimal error estimate in the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
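For orientation, the defining ingredient of the DDG scheme is its numerical flux for the solution derivative at a cell interface. In the one-dimensional notation of Liu and Yan (2009) it takes the form (a sketch from memory; the exact coefficients and admissibility conditions should be checked against the cited papers)

$$\widehat{u_x} \;=\; \beta_0\,\frac{[u]}{\Delta x} \;+\; \{u_x\} \;+\; \beta_1\,\Delta x\,[u_{xx}],$$

where $[\cdot]$ denotes the jump and $\{\cdot\}$ the average across the interface, and $\beta_0,\beta_1$ are tunable scheme coefficients; the variants cited above differ in how additional interface terms restore symmetry of the bilinear form.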
Factors determining water treatment behavior for the prevention of cholera in Chad.
Lilje, Jonathan; Kessely, Hamit; Mosler, Hans-Joachim
2015-07-01
Cholera is a well-known and feared disease in developing countries, and is linked to high rates of morbidity and mortality. Contaminated drinking water and the lack of sufficient treatment are two of the key causes of high transmission rates. This article presents a representative health survey performed in Chad to inform future intervention strategies in the prevention and control of cholera. To identify critical psychological factors for behavior change, structured household interviews were administered to N = 1,017 primary caregivers, assessing their thoughts and attitudes toward household water treatment according to the Risk, Attitude, Norm, Ability, and Self-regulation model. The intervention potential for each factor was estimated by analyzing differences in means between groups of current performers and nonperformers of water treatment. Personal risk evaluation for diarrheal diseases and particularly for cholera was very low among the study population. Likewise, the perception of social norms was found to be rather unfavorable for water treatment behaviors. In addition, self-reported ability estimates (self-efficacy) revealed some potential for intervention. A mass radio campaign is proposed, using information and normative behavior change techniques, in combination with community meetings focused on targeting abilities and personal commitment to water treatment. © The American Society of Tropical Medicine and Hygiene.
ENHANCED RADIOACTIVE CONTENT OF 'BALANCE' BRACELETS.
Tsroya, S; Pelled, O; Abraham, A; Kravchik, T; German, U
2016-09-01
During a routine whole-body counting measurement of a worker at the Nuclear Research Center Negev, abnormal activities of (232)Th and (238)U were measured. After a thorough investigation, it was found that the radioactivity was due to a rubber bracelet (a 'balance bracelet') worn by the worker during the measurement. The bracelet was counted directly by a high-purity germanium gamma-spectrometry system, and the specific activities determined were 10.80 ± 1.37 Bq g(-1) for (232)Th and 5.68 ± 0.88 Bq g(-1) for natural uranium. These values are obviously high compared with naturally occurring radioactive material (NORM) average values. The dose rate to the wrist surface was estimated to be ∼3.9 µGy h(-1), or ∼34 mGy over a whole year. The dose rate at the centre of the wrist was estimated to be ∼2.4 µGy h(-1), or ∼21 mGy over a whole year. The present findings stress a more general issue: synthetic rubber and silicone products are common and widely used, but their radioactivity content is mostly uncontrolled, causing unjustified exposure due to enhanced NORM radioactivity levels. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
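The synthetic comparison described in this abstract can be reproduced in miniature: for a Gaussian predictive distribution the CRPS has a closed form, so maximum likelihood and minimum CRPS estimation can be run on the same correctly specified data. This is a hedged sketch, not the study's code; the data, variable names, and optimizer choice are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import ndtr  # standard normal CDF

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=500)  # true coefficients: 1, 2, sigma = 1.5

def crps_gaussian(mu, sigma, obs):
    # closed-form CRPS of a N(mu, sigma^2) forecast against observation obs
    z = (obs - mu) / sigma
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return sigma * (z * (2 * ndtr(z) - 1) + 2 * pdf - 1 / np.sqrt(np.pi))

def neg_loglik(p):
    a, b, log_s = p
    s = np.exp(log_s)  # log-link keeps the scale positive
    return np.sum(0.5 * ((y - (a + b * x)) / s) ** 2 + np.log(s))

def mean_crps(p):
    a, b, log_s = p
    return np.mean(crps_gaussian(a + b * x, np.exp(log_s), y))

fit_ml = minimize(neg_loglik, [0.0, 0.0, 0.0]).x    # maximum likelihood
fit_crps = minimize(mean_crps, [0.0, 0.0, 0.0]).x   # minimum CRPS
```

With a correct Gaussian assumption the two coefficient vectors should nearly coincide, consistent with the abstract's synthetic result; divergences appear when the distributional assumption is wrong.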
Flying After Conducting an Aircraft Excessive Cabin Leakage Test.
Houston, Stephen; Wilkinson, Elizabeth
2016-09-01
Aviation medical specialists should be aware that commercial airline aircraft engineers may undertake a 'dive equivalent' operation while conducting maintenance activities on the ground. We present a worked example of an occupational risk assessment to determine a minimum safe preflight surface interval (PFSI) for an engineer before flying home to base after conducting an Excessive Cabin Leakage Test (ECLT) on an unserviceable aircraft overseas. We use published dive tables to determine the minimum safe PFSI. The estimated maximum depth acquired during the procedure varies between 10 and 20 fsw and the typical estimated bottom time varies between 26 and 53 min for the aircraft types operated by the airline. Published dive tables suggest that no minimum PFSI is required for such a dive profile. Diving tables suggest that no minimum PFSI is required for the typical ECLT dive profile within the airline; however, having conducted a risk assessment, which considered peak altitude exposure during commercial flight, the worst-case scenario test dive profile, the variability of interindividual inert gas retention, and our existing policy among other occupational groups within the airline, we advised that, in the absence of a bespoke assessment of the particular circumstances on the day, the minimum PFSI after conducting ECLT should be 24 h. Houston S, Wilkinson E. Flying after conducting an aircraft excessive cabin leakage test. Aerosp Med Hum Perform. 2016; 87(9):816-820.
Current Trends in the study of Gender Norms and Health Behaviors
Fleming, Paul J.; Agnew-Brune, Christine
2015-01-01
Gender norms are recognized as one of the major social determinants of health and gender norms can have implications for an individual’s health behaviors. This paper reviews the recent advances in research on the role of gender norms on health behaviors most associated with morbidity and mortality. We find that (1) the study of gender norms and health behaviors is varied across different types of health behaviors, (2) research on masculinity and masculine norms appears to have taken on an increasing proportion of studies on the relationship between gender norms and health, and (3) we are seeing new and varied populations integrated into the study of gender norms and health behaviors. PMID:26075291
Liu, Mingying; Jiang, Jing; Han, Xiaojiao; Qiao, Guirong; Zhuo, Renying
2014-01-01
Dendrocalamus latiflorus Munro is widely distributed in subtropical areas and plays a vital role as a valuable natural resource. Transcriptome sequencing of D. latiflorus Munro has been performed, and numerous genes, especially those predicted to be unique to D. latiflorus Munro, were revealed. qRT-PCR has become a feasible approach to uncover gene expression profiles, and the accuracy and reliability of the results obtained depend upon the proper selection of stable reference genes for accurate normalization. Therefore, a set of suitable internal controls should be validated for D. latiflorus Munro. In this report, twelve candidate reference genes were selected and their expression stability was assessed in ten tissue samples and four leaf samples from seedlings and anther-regenerated plants of different ploidy. The PCR amplification efficiency was estimated, and the candidate genes were ranked according to their expression stability using three software packages: geNorm, NormFinder, and BestKeeper. GAPDH and EF1α were characterized as the most stable genes among different tissues and in all the sample pools, while CYP showed low expression stability. RPL3 had the optimal performance among the four leaf samples. The application of the verified reference genes was illustrated by analyzing ferritin and laccase expression profiles among different experimental sets. The analysis revealed the biological variation in ferritin and laccase transcript expression among the tissues studied and the individual plants. geNorm, NormFinder, and BestKeeper analyses recommended different suitable reference gene(s) for normalization according to the experimental set: GAPDH and EF1α had the highest expression stability across different tissues, and RPL3 for the other sample set. This study emphasizes the importance of validating superior reference genes for qRT-PCR analysis to accurately normalize gene expression in D. latiflorus Munro.
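The geNorm stability measure used in rankings like the one above has a simple definition that can be sketched. This is a hypothetical reimplementation for illustration, not the validated geNorm package: gene j's stability value M is the average, over all other genes, of the standard deviation of the pairwise log2 expression ratios, so genes whose ratios to the rest of the panel stay constant score low (more stable):

```python
import numpy as np

def genorm_m(expr):
    """expr: genes x samples array of relative expression quantities.
    Returns the geNorm stability value M per gene; lower M = more stable."""
    logs = np.log2(expr)
    n_genes = expr.shape[0]
    m = np.empty(n_genes)
    for j in range(n_genes):
        # std. dev. of log-ratios of gene j against every other gene
        ratio_sds = [np.std(logs[j] - logs[k], ddof=1)
                     for k in range(n_genes) if k != j]
        m[j] = np.mean(ratio_sds)
    return m
```

In the full geNorm procedure the least stable gene (highest M) is removed and M is recomputed iteratively until the best pair remains.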
Baron-Epel, Orna; Obid, Samira; Fertig, Shahar; Gitelman, Victoria
2016-01-01
Involvement in car crashes is higher among Israeli Arabs compared to Jews. This study characterized perceived descriptive driving norms (PDDNs) within and outside Arab towns/villages and estimated their association with involvement in car crashes. Arab drivers (594) living in 19 towns and villages were interviewed in face-to-face interviews. The questionnaire included questions about involvement in car crashes, PDDNs within and outside the towns/villages, attitudes toward traffic safety laws, traffic law violations, and socioeconomic and demographic variables. PDDNs represent individuals' perceptions on how safe other people typically drive. The low scores indicate a low percentage of drivers performing unsafe behaviors (safer driving-related norms). A structural equation modeling analysis was applied to identify factors associated with PDDNs and involvement in car crashes. A large difference was found in PDDNs within and outside the towns/villages. Mostly, the respondents reported higher rates of unsafe PDDNs within the towns/villages (mean = 3.76, SD = 0.63) and lower rates of PDDNs outside the towns/villages (mean = 2.12, SD = 0.60). PDDNs outside the towns/villages were associated with involvement in a car crash (r = -0.12, P <.01), but those within the towns/villages were not. Within the towns/villages, attitudes toward traffic laws and PDDNs were positively associated with traffic law violations (r = 0.56, P <.001; r = 0.11, P <.001 respectively), where traffic law violations were directly associated with involvement in a car crash (r = -0.14, P <.001). Unsafe PDDNs may add directly and indirectly to unsafe driving and involvement in car crashes in Arab Israelis. Because PDDNs outside towns/villages were better, increased law enforcement within towns/villages may improve these norms and decrease involvement in car crashes.
Solberg, Monica Favnebøe; Skaala, Øystein; Nilsen, Frank; Glover, Kevin Alan
2013-01-01
One of the most important traits linked with the successful domestication of animals is reduced sensitivity to environmental stressors in the human-controlled environment. In order to examine whether domestication selection in Atlantic salmon Salmo salar L., over approximately ten generations, has inadvertently selected for reduced responsiveness to stress, we compared the growth reaction norms of 29 wild, hybrid and domesticated families reared together under standard hatchery conditions (control) and in the presence of a stressor (reduced water level twice daily). The experiment was conducted for a 14 week period. Farmed salmon outgrew wild salmon 1∶2.93 in the control tanks, and no overlap in mean weight was displayed between families representing the three groups. Thus, the elevation of the reaction norms differed among the groups. Overall, growth was approximately 25% lower in the stressed tanks; however, farmed salmon outgrew wild salmon 1∶3.42 under these conditions. That farmed salmon maintained a relatively higher growth rate than the wild salmon in the stressed tanks demonstrates a lower responsiveness to stress in the farmed salmon. Thus, flatter reaction norm slopes were displayed in the farmed salmon, demonstrating reduced plasticity for this trait under these specific experimental conditions. For all growth measurements, hybrid salmon displayed intermediate values. Wild salmon displayed higher heritability estimates for body weight than the hybrid and farmed salmon in both environments. This suggests reduced genetic variation for body weight in the farmed versus wild salmon studied here. While these results may be linked to the specific families and stocks investigated, and verification in other stocks and traits is needed, these data are consistent with the theoretical predictions of domestication. PMID:23382901
Low-flow characteristics of streams in Ohio through water year 1997
Straub, David E.
2001-01-01
This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and for 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).
The Social Norms of Suicidal and Self-Harming Behaviours in Scottish Adolescents.
Quigley, Jody; Rasmussen, Susan; McAlaney, John
2017-03-15
Although the suicidal and self-harming behaviour of individuals is often associated with similar behaviours in people they know, little is known about the impact of perceived social norms on those behaviours. In a range of other behavioural domains (e.g., alcohol consumption, smoking, eating behaviours) perceived social norms have been found to strongly predict individuals' engagement in those behaviours, although discrepancies often exist between perceived and reported norms. Interventions which align perceived norms more closely with reported norms have been effective in reducing damaging behaviours. The current study aimed to explore whether the Social Norms Approach is applicable to suicidal and self-harming behaviours in adolescents. Participants were 456 pupils from five Scottish high-schools (53% female, mean age = 14.98 years), who completed anonymous, cross-sectional surveys examining reported and perceived norms around suicidal and self-harming behaviour. Friedman's ANOVA with post-hoc Wilcoxon signed-rank tests indicated that proximal groups were perceived as less likely to engage in or be permissive of suicidal and self-harming behaviours than participants reported themselves to be, whilst distal groups tended to be perceived as more likely to do so. Binary logistic regression analyses identified a number of perceived norms associated with reported norms, with close friends' norms positively associated with all outcome variables. The Social Norms Approach may be applicable to suicidal and self-harming behaviour, but associations between perceived and reported norms and predictors of reported norms differ from those found in other behavioural domains. Theoretical and practical implications of the findings are considered.
Performance of Dutch children on the Bayley III: a comparison study of US and Dutch norms.
Steenis, Leonie J P; Verhoeven, Marjolein; Hessen, Dave J; van Baar, Anneloes L
2015-01-01
The Bayley Scales of Infant and Toddler Development-third edition (Bayley-III) are frequently used to assess early child development worldwide. However, the original standardization only included US children, and it is still unclear whether or not these norms are adequate for use in other populations. Recently, norms for the Dutch version of the Bayley-III (the Bayley-III-NL) were made. Scores based on Dutch and US norms were compared to study the need for population-specific norms. Scaled scores based on Dutch and US norms were compared for 1912 children between 14 days and 42 months 14 days. Next, the proportions of children scoring < -1 SD and < -2 SD based on the two norms were compared, to identify over- or under-referral for developmental delay resulting from non-population-based norms. Scaled scores based on Dutch norms fluctuated around values based on US norms on all subtests. The extent of the deviations differed across ages and subtests. Differences in means were significant across all five subtests (p < .01), with small to large effect sizes (ηp²) ranging from .03 to .26. Using the US instead of Dutch norms resulted in over-referral regarding gross motor skills, and under-referral regarding cognitive, receptive communication, expressive communication, and fine motor skills. The Dutch norms differ from the US norms for all subtests and these differences are clinically relevant. Population-specific norms are needed to identify children with low scores for referral and intervention, and to facilitate international comparisons of population data.
Emergence and Evolution of Cooperation Under Resource Pressure
Pereda, María; Zurro, Débora; Santos, José I.; Briz i Godino, Ivan; Álvarez, Myrian; Caro, Jorge; Galán, José M.
2017-01-01
We study the influence that resource availability has on cooperation in the context of hunter-gatherer societies. This paper proposes a model based on archaeological and ethnographic research on resource stress episodes, which exposes three different cooperative regimes according to the relationship between resource availability in the environment and population size. The most interesting regime represents moderate survival stress in which individuals coordinate in an evolutionary way to increase the probabilities of survival and reduce the risk of failing to meet the minimum needs for survival. Populations self-organise in an indirect reciprocity system in which the norm that emerges is to share the part of the resource that is not strictly necessary for survival, thereby collectively lowering the chances of starving. Our findings shed further light on the emergence and evolution of cooperation in hunter-gatherer societies. PMID:28362000
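The sharing norm the abstract describes, in which each agent keeps only the minimum needed for survival and contributes its surplus to agents in deficit, can be expressed as a simple redistribution rule. The sketch below is an illustrative reading of that norm, not the paper's actual agent-based model; the function name and the proportional-split rule are assumptions.

```python
def share_surplus(resources, minimum_need):
    """Surplus-sharing norm: agents above the survival minimum
    contribute their surplus, which is split among agents in
    deficit in proportion to how much each one lacks."""
    surpluses = [max(0.0, r - minimum_need) for r in resources]
    deficits = [max(0.0, minimum_need - r) for r in resources]
    pool, need = sum(surpluses), sum(deficits)
    give_frac = 0.0 if pool == 0 else min(1.0, need / pool)  # share of surplus given up
    get_frac = 0.0 if need == 0 else min(1.0, pool / need)   # share of deficit covered
    return [r - s * give_frac + d * get_frac
            for r, s, d in zip(resources, surpluses, deficits)]

# one well-off agent covers the deficits of two others
shared = share_surplus([10.0, 4.0, 1.0], minimum_need=5.0)  # -> [5.0, 5.0, 5.0]
```

When the pooled surplus exceeds the total deficit, contributors keep the remainder; when it falls short, deficits are only partially covered, which is how the norm lowers (but cannot eliminate) the collective risk of starving.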
Which patients do I treat? An experimental study with economists and physicians
2012-01-01
This experiment investigates decisions made by prospective economists and physicians in an allocation problem which can be framed either medically or neutrally. The potential recipients differ with respect to their minimum needs as well as to how much they benefit from a treatment. We classify the allocators as either 'selfish', 'Rawlsian', or 'maximizing the number of recipients'. Economists tend to maximize their own payoff, whereas the physicians' choices are more in line with maximizing the number of recipients and with Rawlsianism. Regarding the framing, we observe that professional norms surface more clearly in familiar settings. Finally, we scrutinize how the probability of being served and the allocated quantity depend on a recipient's characteristics as well as on the allocator type. JEL Classification: A13, I19, C91, C72 PMID:22827912
Computation of Optimal Actuator/Sensor Locations
2013-12-26
[Figure residue; recoverable content:] With weighting matrices Q = I and R = 0.01, and a minimum-variance LQ cost (with V = I), the L2 norm of the control signal is plotted versus actuator location. A companion panel plots the relative linear-quadratic cost versus actuator location for Q = I and R = 100, 1, 0.01, and 0.0001.
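A cost-versus-location curve like the one this record describes can be computed by sweeping the actuator position and solving the continuous algebraic Riccati equation at each candidate location. The sketch below does this for a finite-difference model of 1-D diffusion; the model, grid size, and cost definition (trace of the Riccati solution, i.e. the minimum-variance LQ cost for V = I) are illustrative assumptions, not taken from the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Finite-difference model of a 1-D diffusion process on (0, 1)
# with zero boundary conditions; control enters at one grid point.
n = 20
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

def lq_cost(actuator_index, R=0.01):
    """Minimum-variance LQ cost trace(P) for Q = I, V = I, and scalar
    control weight R, with the actuator at the given grid point."""
    B = np.zeros((n, 1))
    B[actuator_index, 0] = 1.0 / h
    P = solve_continuous_are(A, B, np.eye(n), R * np.eye(1))
    return np.trace(P)

costs = [lq_cost(i) for i in range(n)]
best = int(np.argmin(costs))  # cost-minimizing actuator location
```

By symmetry of the diffusion model, the cost curve is symmetric about the midpoint of the domain, so the optimal location sits at the center; sweeping R reproduces the family of curves the figure showed.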
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed, increase the likelihood of finding a global minimum, and help ensure that spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
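The "longest-norm pixels" initialization mentioned in this abstract can be sketched as follows: repeatedly pick the pixel with the largest residual L2 norm, then project that direction out of the data so near-duplicate pixels are not re-selected. This is a minimal Gram-Schmidt-style sketch under stated assumptions (pixels in rows, simulated mixing data); it is not the authors' actual implementation.

```python
import numpy as np

def longest_norm_init(X, p):
    """Select p initial endmember pixels by largest residual L2 norm,
    deflating the chosen direction after each pick so successive
    selections are linearly independent."""
    R = np.asarray(X, dtype=float).copy()  # pixels in rows: (n_pixels, n_bands)
    idx = []
    for _ in range(p):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        idx.append(i)
        u = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ u, u)         # project out the chosen direction
    return np.array(idx)

# toy example: 100 simulated pixels mixing 3 distinct spectra
rng = np.random.default_rng(1)
E = rng.random((3, 50)) + np.eye(3, 50) * 5.0  # 3 synthetic "endmembers"
abund = rng.dirichlet(np.ones(3), size=100)    # abundances sum to 1
X = abund @ E
seeds = longest_norm_init(X, 3)                # indices of 3 seed pixels
```

The selected rows of X would then seed the endmember matrix for the cPMF iterations; random selection, by contrast, simply draws p row indices uniformly.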