Sample records for minimum norm solution

  1. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
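
    The Tikhonov-regularized minimum-norm estimate described above has a simple closed form in the underdetermined case. The sketch below is a generic NumPy illustration of that formula, not the authors' code; the lead-field matrix L, data b, and regularization weight lam are hypothetical.

    ```python
    import numpy as np

    def tikhonov_minimum_norm(L, b, lam):
        """Regularized minimum-norm estimate x = L^T (L L^T + lam I)^{-1} b.

        L   : (m, n) lead-field / forward matrix, m measurements << n source voxels
        b   : (m,) measurement vector
        lam : Tikhonov regularization parameter (>= 0)
        """
        m = L.shape[0]
        gram = L @ L.T + lam * np.eye(m)        # regularized sensor-space Gram matrix
        return L.T @ np.linalg.solve(gram, b)   # minimum-norm source estimate

    # Hypothetical toy problem: 32 sensors, 500 source voxels
    rng = np.random.default_rng(0)
    L = rng.standard_normal((32, 500))
    x_true = np.zeros(500)
    x_true[[10, 250]] = 1.0
    b = L @ x_true + 0.01 * rng.standard_normal(32)
    x_hat = tikhonov_minimum_norm(L, b, lam=1.0)
    print(np.linalg.norm(L @ x_hat - b))        # data fit of the regularized estimate
    ```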

  2. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2010-09-01

    regularization seeks the minimum-norm, least squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory...of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β...pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the

  3. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS, is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
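
    The FOCUSS half of this family re-weights a minimum-norm solution recursively so that energy concentrates on a few sources. Below is a minimal sketch of the basic FOCUSS recursion for illustration only (it is not the Shrinking LORETA-FOCUSS implementation); the matrix A, data b, and iteration settings are assumptions.

    ```python
    import numpy as np

    def focuss(A, b, n_iter=20, lam=1e-6, eps=1e-12):
        """Basic FOCUSS recursion: x_{k+1} = W_k (A W_k)^+ b with W_k = diag(|x_k|)."""
        m, n = A.shape
        x = np.ones(n)                           # start from a flat (uninformative) estimate
        for _ in range(n_iter):
            w = np.abs(x) + eps                  # weights taken from the previous iterate
            Aw = A * w                           # column scaling, i.e. A @ diag(w)
            gram = Aw @ Aw.T + lam * np.eye(m)   # small Tikhonov term for numerical stability
            x = w * (Aw.T @ np.linalg.solve(gram, b))
        return x                                 # iterates concentrate on a few large entries
    ```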

  4. Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper puts forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of sources in the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the only prerequisite for developing the EEG inverse solution, without assuming any other characteristics of the solution, in order to realize the common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The key step is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the initial solution, and repeat this process until the last two estimates remain unchanged.

  5. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    PubMed

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

    Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of ℓ1-norm and provided the mathematical basis to improve image quality and robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve which requires minimum user-dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing with known electrode errors, which requires a rigorous regularization and causes the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but it also provides high contrast resolution on organ boundaries.
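
    The L-curve strategy mentioned above picks the regularization weight at the "corner" of a log-log curve of data misfit versus solution norm. The sketch below illustrates the idea on a plain Tikhonov (ℓ2) problem rather than the paper's mixed ℓ1/ℓ2 PDIPM formulation; the test matrix, data, and the maximum-curvature corner rule are assumptions.

    ```python
    import numpy as np

    def tikhonov(A, b, lam):
        """Tikhonov solution minimizing ||A x - b||^2 + lam * ||x||^2."""
        m = A.shape[0]
        return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)

    def l_curve_corner(A, b, lambdas):
        """Return the lambda at the point of maximum curvature of the L-curve."""
        lambdas = np.asarray(lambdas, dtype=float)
        rho = np.empty(lambdas.size)                      # log residual norms
        eta = np.empty(lambdas.size)                      # log solution norms
        for i, lam in enumerate(lambdas):
            x = tikhonov(A, b, lam)
            rho[i] = np.log(np.linalg.norm(A @ x - b))
            eta[i] = np.log(np.linalg.norm(x))
        t = np.log(lambdas)
        d1r, d1e = np.gradient(rho, t), np.gradient(eta, t)
        d2r, d2e = np.gradient(d1r, t), np.gradient(d1e, t)
        curvature = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
        return lambdas[np.argmax(curvature)]

    # Hypothetical ill-conditioned test problem
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 40)) @ np.diag(1.0 / np.arange(1, 41) ** 2)
    b = A @ np.ones(40) + 1e-4 * rng.standard_normal(40)
    print(l_curve_corner(A, b, np.logspace(-10, 0, 40)))
    ```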

  6. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  7. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
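
    First-order schemes of this kind typically handle the two-level ℓ₁/ℓ₂ prior through its proximal operator, which shrinks each source's whole time course as a block. The following is a small illustrative sketch of that operator and a plain ISTA-style loop, not the MxNE/MNE package implementation; the gain matrix G, data M, and step rule are assumptions.

    ```python
    import numpy as np

    def prox_l21(X, thresh):
        """Row-wise group soft-thresholding: prox of thresh * sum_i ||X[i, :]||_2."""
        row_norms = np.linalg.norm(X, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - thresh / np.maximum(row_norms, 1e-12), 0.0)
        return X * shrink                       # inactive sources are zeroed out entirely

    def ista_l21(G, M, alpha, n_iter=200):
        """Minimize 0.5 * ||G X - M||_F^2 + alpha * sum_i ||X[i, :]||_2 by proximal gradient."""
        step = 1.0 / np.linalg.norm(G, 2) ** 2  # 1 / Lipschitz constant of the data-fit gradient
        X = np.zeros((G.shape[1], M.shape[1]))
        for _ in range(n_iter):
            grad = G.T @ (G @ X - M)            # gradient of the quadratic data-fit term
            X = prox_l21(X - step * grad, alpha * step)
        return X
    ```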

  8. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  9. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation like, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  10. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.

  11. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).

  12. Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.

    PubMed

    Li, Yuanqing; Amari, Shun-Ichi

    2010-07-01

    In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have received much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by linear programming. In many cases, the 0-norm solution can be obtained through finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second is necessary but not sufficient; however, it is easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
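
    The 1-norm solution referred to here is the basis-pursuit solution min ‖x‖₁ subject to Ax = b, which becomes a linear program once x is split into nonnegative parts. A hedged SciPy sketch (the matrix A, the sparse ground truth, and the solver choice are illustrative, not taken from the paper):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def l1_minimum_norm(A, b):
        """Solve min ||x||_1 s.t. A x = b via the LP split x = u - v with u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)                                   # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])                            # A u - A v = b
        res = linprog(c, A_eq=A_eq, b_eq=b,
                      bounds=[(0, None)] * (2 * n), method="highs")
        u, v = res.x[:n], res.x[n:]
        return u - v

    # Illustrative example: for a sufficiently sparse x0 the 1-norm solution
    # typically coincides with the 0-norm (sparsest) solution.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 80))
    x0 = np.zeros(80)
    x0[[3, 40, 77]] = [1.0, -2.0, 0.5]
    x1 = l1_minimum_norm(A, A @ x0)
    print(np.linalg.norm(x1 - x0))                           # near zero when recovery succeeds
    ```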

  13. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method is a robust high-resolution method for spectrum estimation. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed and shown to be suitable for data extrapolation methods that improve the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained, on both simulated data and actual measured data, than with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method.

  14. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    PubMed Central

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  15. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), which deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on the average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of S1 can be obtained with the MNE. MNE can be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.

  16. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    PubMed

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. © 2013. Published by Elsevier Inc. All rights reserved.

  17. A linear programming approach to characterizing norm bounded uncertainty from experimental data

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.; Bayard, D. S.; Yam, Y.

    1991-01-01

    The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).

  18. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.

  19. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  20. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  1. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.

  2. Using Norm-Referenced Data to Set Standards for a Minimum Competency Program in the State of South Carolina.

    ERIC Educational Resources Information Center

    Garcia-Quintana, Roan A.; Mappus, M. Lynne

    1980-01-01

    Norm referenced data were utilized for determining the mastery cutoff score on a criterion referenced test. Once a cutoff score on the norm referenced measure is selected, the cutoff score on the criterion referenced measure becomes that score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)

  3. Chromotomography for a rotating-prism instrument using backprojection, then filtering.

    PubMed

    Deming, Ross W

    2006-08-01

    A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.

  4. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows one to efficiently reduce the effect of sensor noise. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497

  5. New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Wach, Łukasz

    2017-10-01

    In this paper new methods for the energy-based minimization of perfect control inputs are presented. For that purpose, multivariable integer- and fractional-order models are applied, which can be used to describe various real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. Nevertheless, the above-mentioned tools do not guarantee optimal control corresponding to optimal input energy. Therefore a new class of inverse-based methods has been introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the great potential of the new energy-based approaches.

  6. Cops or Robbers — a Bistable Society

    NASA Astrophysics Data System (ADS)

    Kułakowski, K.

    The norm game described by Axelrod in 1985 was recently treated with the master equation formalism. Here we discuss the equations, where (i) those who break the norm cannot punish and those who punish cannot break the norm, and (ii) the tendency to punish is suppressed if the majority breaks the norm. The second mechanism is new. For some values of the parameters the solution shows a saddle-point bifurcation. Then, two stable solutions are possible, in which either the majority breaks the norm or the majority punishes. This means that norm breaking can be discontinuous when measured at the social scale. The bistable character is also reproduced with new computer simulations on an Erdös-Rényi directed network.

  7. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data was collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906

  8. On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    1997-01-01

    In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.

  9. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
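
    The iteratively reweighted norm idea used above can be illustrated with a generic IRLS loop that approximates an ℓ1 data term by a sequence of weighted least-squares solves. This is a hedged toy sketch with a plain ℓ2 (Tikhonov-style) constraint term, not the paper's exact scheme; A, b, L, and the weighting rule are assumptions.

    ```python
    import numpy as np

    def irls_l1_data(A, b, L, lam, n_iter=30, eps=1e-6):
        """Approximately minimize ||A x - b||_1 + lam * ||L x||_2^2 by reweighted least squares."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            r = A @ x - b
            w = 1.0 / np.maximum(np.abs(r), eps)     # large residuals (outliers) get small weights
            lhs = (A.T * w) @ A + lam * (L.T @ L)    # normal equations of the weighted problem
            x = np.linalg.solve(lhs, (A.T * w) @ b)
        return x
    ```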

  10. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

    In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.

  11. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.

  12. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. The very recent study on ℓ1∕2 norm regularization theory in compressive sensing shows that its solutions can give sparser results than using the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems. Then the closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary can avoid the trivial solutions well while simultaneously capturing the intrinsic properties of the dictionary. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than the state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method. 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I). 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II). 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]). 3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space. 4.1. The case of a non-degenerate minimum point ([137], I). 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples. 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]). 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]). 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm. 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums. 6.1. General situations. 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process. 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields. 7.1. Homogeneous fields and fields with constant dispersion. 7.2. Finitely many maximum points of dispersion. 7.3. Manifold of maximum points of dispersion. 7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space. 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1. 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ^2. 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]. 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes. 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
    Bibliography

  14. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  15. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search), initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints on the n unknown variables, leading to a grid in R^n containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
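
    The centre-of-gravity step can be illustrated with a tiny grid-search toy: evaluate the residual of the system at every node of a grid built from the a priori bounds, keep the nodes whose residual norm is below a threshold, and return their centroid. This is only a schematic sketch of the idea, not the TOPINV implementation; the example system, bounds, and threshold are assumptions.

    ```python
    import numpy as np
    from itertools import product

    def grid_search_centroid(residual_fn, bounds, n_steps=50, tol=0.1):
        """Centroid of all grid points whose residual norm falls below tol."""
        axes = [np.linspace(lo, hi, n_steps) for lo, hi in bounds]
        feasible = [p for p in product(*axes)
                    if np.linalg.norm(residual_fn(np.array(p))) < tol]
        return np.mean(feasible, axis=0) if feasible else None

    # Illustrative underdetermined system: one equation, two unknowns (the unit circle),
    # with a priori bounds restricting the search to the upper half plane.
    residual = lambda p: np.array([p[0] ** 2 + p[1] ** 2 - 1.0])
    print(grid_search_centroid(residual, bounds=[(-2.0, 2.0), (0.0, 2.0)]))
    ```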

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khosla, D.; Singh, M.

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects that image from the possible set of feasible images which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques like functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122 channel MEG system.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well-specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which according to the model, decouples the Cartesian space DOF and the redundant DOF.
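
    For reference, the unaugmented minimum Euclidean norm resolution of the joint velocities is the pseudoinverse solution of the kinematic velocity model J q̇ = ẋ. A minimal NumPy sketch of that baseline (the Jacobian and task velocity are made-up values; this is not the report's augmented formulation):

    ```python
    import numpy as np

    def min_norm_joint_velocities(J, x_dot):
        """Minimum Euclidean norm joint velocities q_dot satisfying J @ q_dot = x_dot."""
        return np.linalg.pinv(J) @ x_dot

    # Illustrative redundant arm: 3-dimensional task, 5 joints (values are made up)
    rng = np.random.default_rng(2)
    J = rng.standard_normal((3, 5))          # manipulator Jacobian
    x_dot = np.array([0.10, 0.00, -0.05])    # desired end-effector velocity
    q_dot = min_norm_joint_velocities(J, x_dot)
    print(np.allclose(J @ q_dot, x_dot))     # task achieved with the smallest ||q_dot||
    ```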

  18. Lower Bounds for Possible Singular Solutions for the Navier-Stokes and Euler Equations Revisited

    NASA Astrophysics Data System (ADS)

    Cortissoz, Jean C.; Montero, Julio A.

    2018-03-01

    In this paper we give optimal lower bounds for the blow-up rate of the \dot{H}^s(T^3)-norm, 1/2 < s < 5/2, of possible singular solutions.

  19. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than that of previously described facet-searching methods which increase in proportion to the square of the number of controls.

  20. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
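
    For a single fixed parameter value, the core step of minimizing a weighted ℓ2-norm of the residual over a fixed subspace reduces to an ordinary weighted least-squares problem. The deterministic toy sketch below only illustrates that reduction; the basis V, weight vector w, and test system are assumptions, and none of the stochastic (parameterized) machinery of the paper is included.

    ```python
    import numpy as np

    def weighted_lspg_solve(A, b, V, w):
        """Return V @ y where y minimizes || diag(sqrt(w)) (A V y - b) ||_2."""
        sw = np.sqrt(w)
        y, *_ = np.linalg.lstsq(sw[:, None] * (A @ V), sw * b, rcond=None)
        return V @ y                                     # approximate solution in the full space

    # Illustrative example: 100x100 system approximated in a 10-dimensional subspace
    rng = np.random.default_rng(3)
    A = np.eye(100) + 0.1 * rng.standard_normal((100, 100))
    b = rng.standard_normal(100)
    V, _ = np.linalg.qr(rng.standard_normal((100, 10)))  # orthonormal basis of the trial subspace
    w = np.ones(100)                                     # uniform weighting for simplicity
    x_hat = weighted_lspg_solve(A, b, V, w)
    print(np.linalg.norm(A @ x_hat - b))                 # residual norm that was minimized
    ```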

  1. EEG-distributed inverse solutions for a spherical head model

    NASA Astrophysics Data System (ADS)

    Riera, J. J.; Fuentes, M. E.; Valdés, P. A.; Ohárriz, Y.

    1998-08-01

    The theoretical study of the minimum norm solution to the MEG inverse problem has been carried out in previous papers for the particular case of spherical symmetry. However, a similar study for the EEG is remarkably more difficult due to the very complicated nature of the expression relating the voltage differences on the scalp to the primary current density (PCD) even for this simple symmetry. This paper introduces the use of the electric lead field (ELF) on the dyadic formalism in the spherical coordinate system to overcome such a drawback using an expansion of the ELF in terms of longitudinal and orthogonal vector fields. This approach allows us to represent EEG Fourier coefficients on a 2-sphere in terms of a current multipole expansion. The choice of a suitable basis for the Hilbert space of the PCDs on the brain region allows the current multipole moments to be related by spatial transfer functions to the PCD spectral coefficients. Properties of the most used distributed inverse solutions are explored on the basis of these results. Also, a part of the ELF null space is completely characterized and those spherical components of the PCD which are possible silent candidates are discussed.

  2. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and achieves relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, with the number of projections reduced by 50% relative to the FDK algorithm, to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
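
    A stripped-down version of the gradient-projection idea (not the authors' GTV algorithm) is sketched below: each iteration takes a gradient step on a least-squares data term plus a smoothed TV penalty and then projects onto the non-negative orthant. The system matrix A, penalty weight lam, step size and smoothing eps are assumed example values, and the TV discretization is deliberately crude.

```python
import numpy as np

def smoothed_tv_grad(img, eps=1e-3):
    """Gradient of a smoothed isotropic total variation of a 2-D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div                                    # -div( grad(u) / |grad(u)| )

def projected_gradient_recon(A, b, shape, lam=0.05, step=1e-3, iters=200):
    """Non-negative least-squares + TV reconstruction by projected gradient."""
    x = np.zeros(np.prod(shape))
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * smoothed_tv_grad(x.reshape(shape)).ravel()
        x = np.maximum(x - step * grad, 0.0)       # gradient step, then projection onto x >= 0
    return x.reshape(shape)
```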

  3. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  4. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and , these solutions agree with exact solutions up to errors whose norms are bounded by for some C and γ>0. Under more restrictive hypotheses, we prove that for sufficiently small T', implies the norms of the errors are bounded by for some C', γ'>0, and σ > 0.

  5. Developing Uncertainty Models for Robust Flutter Analysis Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Potter, Starr; Lind, Rick; Kehoe, Michael W. (Technical Monitor)

    2001-01-01

    A ground vibration test can be used to obtain information about structural dynamics that is important for flutter analysis. Traditionally, this information, such as natural frequencies of modes, is used to update analytical models used to predict flutter speeds. The ground vibration test can also be used to obtain uncertainty models, such as natural frequencies and their associated variations, that can update analytical models for the purpose of predicting robust flutter speeds. Analyzing test data using the -norm, rather than the traditional 2-norm, is shown to lead to a minimum-size uncertainty description and, consequently, a least-conservative robust flutter speed. This approach is demonstrated using ground vibration test data for the Aerostructures Test Wing. Different norms are used to formulate uncertainty models and their associated robust flutter speeds to evaluate which norm is least conservative.

  6. A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.

    PubMed

    Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin

    2018-07-01

    Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and 3 slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
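
    The "minimum current norm" formulation has the familiar least-norm closed form. The sketch below is purely illustrative; the sensitivity matrix S and the target field samples are invented placeholders, not the coil model of the paper.

```python
import numpy as np

def min_norm_currents(S, target):
    """Smallest-norm channel currents c satisfying S @ c = target.

    S      : (n_points, n_channels) field per unit current at the control points
    target : (n_points,) desired field values (e.g., defining the slice profile)
    Assumes S has full row rank.
    """
    return S.T @ np.linalg.solve(S @ S.T, target)

n_points, n_channels = 4, 9
S = np.random.randn(n_points, n_channels)
target = np.array([0.0, 1.0, 0.0, 1.0])
c = min_norm_currents(S, target)
assert np.allclose(S @ c, target)   # constraints met with minimum sum of squared currents
```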

  7. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven effective priors for many applications such as background modeling, photometric stereo and image alignment. And they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Together with the analytic solutions to Lp-norm minimization with two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g. moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  8. Input relegation control for gross motion of a kinematically redundant manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
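
    The augmentation idea can be illustrated with a few lines of linear algebra (a sketch under assumed placeholders, not the report's formulation): stack the task Jacobian J with rows T that quantify the redundant DOF; with T spanning the null space of J and the redundancy variable set to zero, the square augmented system reproduces the minimum Euclidean-norm joint velocities.

```python
import numpy as np
from scipy.linalg import null_space

m, n = 3, 5                                   # 3 task DOF, 5 joints -> 2 redundant DOF
J = np.random.randn(m, n)                     # task Jacobian (placeholder values)
xdot = np.random.randn(m)                     # desired end-effector velocity

T = null_space(J).T                           # (n - m) rows quantifying the redundant DOF
A = np.vstack([J, T])                         # augmented, well-specified kinematic model
qdot = np.linalg.solve(A, np.concatenate([xdot, np.zeros(n - m)]))

# With the redundancy variable fixed at zero, this matches the pseudoinverse solution:
assert np.allclose(qdot, np.linalg.pinv(J) @ xdot)
```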

  9. Application of generalized singular value decomposition to ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Bhuyan, K.; Singh, S.; Bhuyan, P.

    2004-10-01

    The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD) based algorithm. Model ionospheric total electron content (TEC) data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA are used in this analysis. The issue of optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is being addressed through simulation of the equatorial ionization anomaly (EIA), in addition to its application to investigate complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to find the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations, such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.

  10. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    NASA Astrophysics Data System (ADS)

    Xia, Meimei

    2018-04-01

    Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on Algebraic t-conorm and t-norm, which are special cases of Archimedean t-conorm and t-norm. In this paper, the intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, based on which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems that hold in the existing matrix game with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and the applicability of the proposed method.

  11. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, weighting factors of the wMNE were determined by the cost values previously calculated by a simplified MUSIC scanning which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of the current distributions from noisy data.
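
    In matrix form, a regularized weighted minimum-norm estimate can be written down directly. The sketch below uses a generic diagonal weight in place of the MUSIC-derived factors described above; the lead field L, data b, weights w and the regularization parameter are all assumed placeholders.

```python
import numpy as np

def weighted_minimum_norm(L, b, w, lam=1e-2):
    """Source estimate minimizing ||W j||_2 subject to a (regularized) data fit L j ~ b.

    L : (n_sensors, n_sources) lead field
    b : (n_sensors,) measured field
    w : (n_sources,) per-source weights (depth- and/or MUSIC-based in the paper)
    """
    Winv2 = np.diag(1.0 / w**2)                        # (W^T W)^{-1}
    G = L @ Winv2 @ L.T + lam * np.eye(L.shape[0])     # regularized Gram matrix
    return Winv2 @ L.T @ np.linalg.solve(G, b)

n_sensors, n_sources = 64, 500
L = np.random.randn(n_sensors, n_sources)
b = np.random.randn(n_sensors)
w = np.ones(n_sources)                                 # replace with informative weights
j_hat = weighted_minimum_norm(L, b, w)
```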

  12. Sparse EEG/MEG source estimation via a group lasso

    PubMed Central

    Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor

    2017-01-01

    Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencelphalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790

  13. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
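
    The variable-splitting trick can be shown in a few lines: writing x = u - v with u, v >= 0 makes the L1 penalty linear (sum(u) + sum(v)), so the problem becomes a bound-constrained quadratic that a projected-gradient iteration can handle. The following is a generic sketch; A, b, lam and the step size are illustrative, not the paper's ECG setup.

```python
import numpy as np

def l1_gradient_projection(A, b, lam=0.1, iters=500):
    """Approximately minimize 0.5 * ||A x - b||_2^2 + lam * ||x||_1 via x = u - v, u, v >= 0."""
    m, n = A.shape
    step = 0.5 / np.linalg.norm(A, 2) ** 2          # conservative step for the split quadratic
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ (u - v) - b)                 # gradient of the data term w.r.t. x
        u = np.maximum(u - step * (g + lam), 0.0)   # projected gradient step on u
        v = np.maximum(v - step * (-g + lam), 0.0)  # projected gradient step on v
    return u - v
```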

  14. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum error landing problem with convexified constraints, then applies that solution to a minimum fuel landing problem with convexified constraints. The result is a solution that is a minimum error and minimum fuel solution and that is also a feasible solution to the analogous system with non-convex thruster constraints.

  15. Space-time derivative estimates of the Koch-Tataru solutions to the nematic liquid crystal system in Besov spaces

    NASA Astrophysics Data System (ADS)

    Liu, Qiao

    2015-06-01

    In a recent paper [7], Y. Du and K. Wang (2013) proved that the global-in-time Koch-Tataru type solution (u, d) to the n-dimensional incompressible nematic liquid crystal flow with small initial data (u0, d0) in BMO^{-1} × BMO has arbitrary space-time derivative estimates in the so-called Koch-Tataru space norms. The purpose of this paper is to show that the Koch-Tataru type solution satisfies decay estimates for any space-time derivative involving some borderline Besov space norms.

  16. Application of functional analysis to perturbation theory of differential equations. [nonlinear perturbation of the harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.; Bond, V. B.

    1980-01-01

    The deviation of the solution of the differential equation y' = f(t, y), y(0) = y_0 from the solution of the perturbed system z' = f(t, z) + g(t, z), z(0) = z_0 was investigated for the case where f and g are continuous functions on I × R^n into R^n, where I = (0, a) or I = (0, ∞). These functions are assumed to satisfy the Lipschitz condition in the variable z. The space Lip(I) of all such functions with suitable norms forms a Banach space. By introducing a suitable norm in the space of continuous functions C(I), the problem can be reduced to an equivalent problem in terms of operators in such spaces. A theorem on existence and uniqueness of the solution is presented by means of Banach space techniques. Norm estimates on the rate of growth of such solutions are found. As a consequence, estimates of the deviation of a solution due to perturbation are obtained. Continuity of the solution with respect to the initial data and the perturbation is established. A nonlinear perturbation of the harmonic oscillator is considered, as is a perturbation of the equations of the restricted three-body problem linearized at a libration point.

  17. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.

  18. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
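
    A rough sketch of steps (ii)-(iv), not the authors' Matlab code, is given below; the residual-based inconsistency index and the threshold are simplified stand-ins.

```python
import numpy as np

def cleaned_min_norm_lsq(A, b, inconsistency_factor=3.0):
    """Drop overly contradictory equations, then return the minimum-norm least-squares solution."""
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)          # initial least-squares fit
    r = np.abs(A @ x0 - b)                               # per-equation residuals
    scale = np.median(r) + 1e-12                         # typical residual level
    keep = r <= inconsistency_factor * scale             # flag equations that contradict the rest
    x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
    return x, np.flatnonzero(~keep)                      # solution and indices of removed equations
```

    Note that np.linalg.lstsq already returns the minimum-norm solution when the reduced system is rank-deficient, so step (i), pruning numerically redundant rows, is not strictly needed for the solve itself and is omitted in this sketch.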

  19. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.

  20. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.

  1. MEG Source Imaging Method using Fast L1 Minimum-norm and its Applications to Signals with Brain Noise and Human Resting-state Source Amplitude Images

    PubMed Central

    Huang, Ming-Xiong; Huang, Charles W.; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L.; Baker, Dewleen G.; Song, Tao; Harrington, Deborah L.; Theilmann, Rebecca J.; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M.; Edgar, J. Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T.; Drake, Angela; Lee, Roland R.

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL’s performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL’s performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer’s problems of signal leaking and distorted source time-courses. PMID:24055704

  2. On the mass concentration of L^2-constrained minimizers for a class of Schrödinger-Poisson equations

    NASA Astrophysics Data System (ADS)

    Ye, Hongyu; Luo, Tingjian

    2018-06-01

    In this paper, we study the mass concentration behavior of positive solutions with prescribed L^2-norm for a class of Schrödinger-Poisson equations in R^3: -Δu - μu + φ_u u - |u|^{p-2}u = 0, x ∈ R^3, μ ∈ R, with -Δφ_u = |u|^2, where p ∈ (2,6). We show that positive solutions with prescribed L^2-norm, as the prescribed norm tends to 0 (in some cases) or to +∞ (in others), behave like the positive solution of the Schrödinger equation -Δu + u = |u|^{p-2}u in R^3.

  3. Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied due to the difficulty of directly solving the l1-norm maximization problem; this strategy, however, easily gets stuck in a local solution. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.

  4. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
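
    For orientation, the Euclidean starting point that the paper generalizes is ordinary Tikhonov-regularized least squares, sketched below (alpha is an arbitrary example value):

```python
import numpy as np

def tikhonov(A, b, alpha=1e-2):
    """Minimize ||A x - b||_2^2 + alpha * ||x||_2^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

    Replacing the Euclidean norms with polyhedral norms (such as l1 or l-infinity) turns this closed-form solve into the mathematical programming problems discussed in the paper.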

  5. Variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le

    2018-01-01

    The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternate iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the split Bregman variant of the proposed method is more than 10 times faster than the variant solved with the steepest descent method.

  6. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
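
    The closed-form inner step that makes IRNN practical is a weighted singular value thresholding, where each singular value receives its own threshold derived from the gradient of the chosen nonconvex surrogate. A minimal sketch follows; the log surrogate and parameter values are illustrative assumptions.

```python
import numpy as np

def weighted_svt(Y, weights, mu):
    """Solve min_X  sum_i w_i * sigma_i(X) + (mu/2) * ||X - Y||_F^2.

    The closed form is valid when the weights do not decrease as the singular
    values (sorted in decreasing order) decrease, which concave surrogates guarantee.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.maximum(s - weights / mu, 0.0)       # per-singular-value shrinkage
    return (U * s_new) @ Vt

Y = np.random.randn(30, 20)
s = np.linalg.svd(Y, compute_uv=False)
w = 1.0 / (s + 1e-2)            # gradient of log(sigma + eps): smaller sigma -> larger threshold
X = weighted_svt(Y, w, mu=1.0)
```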

  7. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. Using l1 norms on the data and regularization terms in EIT image reconstruction addresses both the problem of reconstruction with sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.

  8. On the structure of critical energy levels for the cubic focusing NLS on star graphs

    NASA Astrophysics Data System (ADS)

    Adami, Riccardo; Cacciapuoti, Claudio; Finco, Domenico; Noja, Diego

    2012-05-01

    We provide information on a non-trivial structure of phase space of the cubic nonlinear Schrödinger (NLS) on a three-edge star graph. We prove that, in contrast to the case of the standard NLS on the line, the energy associated with the cubic focusing Schrödinger equation on the three-edge star graph with a free (Kirchhoff) vertex does not attain a minimum value on any sphere of constant L2-norm. We moreover show that the only stationary state with prescribed L2-norm is indeed a saddle point.

  9. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172

  10. The SALT NORM: a quantitative chemical-mineralogical characterization of natural waters

    USGS Publications Warehouse

    Bodine, Marc W.; Jones, Blair F.

    1986-01-01

    The new computer program SNORM calculates the salt norm from the chemical composition of a natural water. The salt norm is the quantitative ideal equilibrium assemblage that would crystallize if the water evaporated to dryness at 25 C and 1 bar pressure under atmospheric partial pressure of CO2. SNORM proportions solute concentrations to achieve charge balance. It quantitatively distributes the 18 acceptable solutes into normative salts that are assigned from 63 possible normative salts to allow only stable associations based on the Gibbs Phase Rule, available free energy values, and observed low-temperature mineral associations. Although most natural water compositions represent multiple solute origins, results from SNORM identify three major categories: meteoric or weathering waters that are characterized by normative alkali-bearing sulfate and carbonate salts; connate marine-like waters that are chloride-rich with a halite-bischofite-carnallite-kieserite-anhydrite association; and diagenetic waters that are frequently of marine origin but yield normative salts, such as Ca-bearing chlorides (antarcticite and tachyhydrite) and sylvite, which suggest solute alteration by secondary mineral reactions. The solute source or reaction process within each of the above categories is commonly indicated by the presence or absence of diagnostic normative salts and their relative abundance in the normative salt assemblage. For example, salt norms: (1) may identify lithologic source; (2) may identify the relative roles of carbonic and sulfuric acid hydrolysis in the evolution of weathering waters; (3) may identify the origin of connate water from normal marine, hypersaline, or evaporite salt resolution processes; and (4) may distinguish between dolomitization and silicate hydrolysis or exchange for the origin of diagenetic waters. (Author's abstract)

  11. On Hilbert-Schmidt norm convergence of Galerkin approximation for operator Riccati equations

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation framework for the solution of operator algebraic Riccati equations is developed. The approach taken is based on a formulation of the Riccati equation as an abstract nonlinear operator equation on the space of Hilbert-Schmidt operators. Hilbert-Schmidt norm convergence of solutions to generic finite dimensional Galerkin approximations to the Riccati equation to the solution of the original infinite dimensional problem is argued. The application of the general theory is illustrated via an operator Riccati equation arising in the linear-quadratic design of an optimal feedback control law for a 1-D heat/diffusion equation. Numerical results demonstrating the convergence of the associated Hilbert-Schmidt kernels are included.

  12. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  13. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor to minimize the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.

  14. Global Well-Posedness of the Boltzmann Equation with Large Amplitude Initial Data

    NASA Astrophysics Data System (ADS)

    Duan, Renjun; Huang, Feimin; Wang, Yong; Yang, Tong

    2017-07-01

    The global well-posedness of the Boltzmann equation with initial data of large amplitude has remained a long-standing open problem. In this paper, by developing a new L^∞_x L^1_v ∩ L^∞_{x,v} approach, we prove the global existence and uniqueness of mild solutions to the Boltzmann equation in the whole space or torus for a class of initial data with bounded velocity-weighted L^∞ norm under some smallness condition on the L^1_x L^∞_v norm as well as defect mass, energy and entropy, so that the initial data allow large amplitude oscillations. Both the hard and soft potentials with angular cut-off are considered, and the large time behavior of solutions in the L^∞_{x,v} norm with explicit rates of convergence is also studied.

  15. Injunctive Norms and Alcohol Consumption: A Revised Conceptualization

    PubMed Central

    Krieger, Heather; Neighbors, Clayton; Lewis, Melissa A.; LaBrie, Joseph W.; Foster, Dawn W.; Larimer, Mary E.

    2016-01-01

    Background Injunctive norms have been found to be important predictors of behaviors in many disciplines with the exception of alcohol research. This exception is likely due to a misconceptualization of injunctive norms for alcohol consumption. To address this, we outline and test a new conceptualization of injunctive norms and personal approval for alcohol consumption. Traditionally, injunctive norms have been assessed using Likert scale ratings of approval perceptions, whereas descriptive norms and individual behaviors are typically measured with behavioral estimates (i.e., number of drinks consumed per week, frequency of drinking, etc.). This makes comparisons between these constructs difficult because they are not similar conceptualizations of drinking behaviors. The present research evaluated a new representation of injunctive norms with anchors comparable to descriptive norms measures. Methods A study and a replication were conducted including 2,559 and 1,189 undergraduate students from three different universities. Participants reported on their alcohol-related consumption behaviors, personal approval of drinking, and descriptive and injunctive norms. Personal approval and injunctive norms were measured using both traditional measures and a new drink-based measure. Results Results from both studies indicated that drink-based injunctive norms were uniquely and positively associated with drinking whereas traditionally assessed injunctive norms were negatively associated with drinking. Analyses also revealed significant unique associations between drink-based injunctive norms and personal approval when controlling for descriptive norms. Conclusions These findings provide support for a modified conceptualization of personal approval and injunctive norms related to alcohol consumption and, importantly, offers an explanation and practical solution for the small and inconsistent findings related to injunctive norms and drinking in past studies. PMID:27030295

  16. Potential estimates for the p-Laplace system with data in divergence form

    NASA Astrophysics Data System (ADS)

    Cianchi, A.; Schwarzacher, S.

    2018-07-01

    A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.

  17. The dimensional salience solution to the expectancy-value muddle: an extension.

    PubMed

    Newton, Joshua D; Newton, Fiona J; Ewing, Michael T

    2014-01-01

    The theory of reasoned action (TRA) specifies a set of expectancy-value, belief-based frameworks that underpin attitude (behavioural beliefs × outcome evaluations) and subjective norm (normative beliefs × motivation to comply). Unfortunately, the most common method for analysing these frameworks generates statistically uninterpretable findings, resulting in what has been termed the 'expectancy-value muddle'. Recently, however, a dimensional salience approach was found to resolve this muddle for the belief-based framework underpinning attitude. An online survey of 262 participants was therefore conducted to determine whether the dimensional salience approach could also be applied to the belief-based framework underpinning subjective norm. Results revealed that motivations to comply were greater for salient, as opposed to non-salient, social referents. The belief-based framework underpinning subjective norm was therefore represented by evaluating normative belief ratings for salient social referents. This modified framework was found to predict subjective norm, although predictions were greater when participants were forced to select five salient social referents rather than being free to select any number of social referents. These findings validate the use of the dimensional salience approach for examining the belief-based frameworks underpinning subjective norm. As such, this approach provides a complete solution to addressing the expectancy-value muddle in the TRA.

  18. Adaptation to an extraordinary environment by evolution of phenotypic plasticity and genetic assimilation.

    PubMed

    Lande, Russell

    2009-07-01

    Adaptation to a sudden extreme change in environment, beyond the usual range of background environmental fluctuations, is analysed using a quantitative genetic model of phenotypic plasticity. Generations are discrete, with time lag tau between a critical period for environmental influence on individual development and natural selection on adult phenotypes. The optimum phenotype, and genotypic norms of reaction, are linear functions of the environment. Reaction norm elevation and slope (plasticity) vary among genotypes. Initially, in the average background environment, the character is canalized with minimum genetic and phenotypic variance, and no correlation between reaction norm elevation and slope. The optimal plasticity is proportional to the predictability of environmental fluctuations over time lag tau. During the first generation in the new environment the mean fitness suddenly drops and the mean phenotype jumps towards the new optimum phenotype by plasticity. Subsequent adaptation occurs in two phases. Rapid evolution of increased plasticity allows the mean phenotype to closely approach the new optimum. The new phenotype then undergoes slow genetic assimilation, with reduction in plasticity compensated by genetic evolution of reaction norm elevation in the original environment.

  19. Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    PubMed Central

    Rahimi, Azar; Xu, Jingjia; Wang, Linwei

    2013-01-01

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activities is much limited. The progress to computationally reconstruct cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness and the lack of a unique solution of the reconstruction problem. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of source distribution is important for revealing the potential disruption to the normal heart excitation. PMID:24348735
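
    One common way to handle an Lp-norm (1 < p < 2) regularizer is iteratively reweighted least squares, in which each pass solves a weighted L2 problem whose weights come from the current iterate. The sketch below is a generic illustration; L, b, lam, p and eps are placeholder names and values, not the paper's formulation.

```python
import numpy as np

def lp_irls(L, b, p=1.5, lam=1e-2, iters=20, eps=1e-6):
    """Approximately minimize ||L x - b||_2^2 + lam * sum_i |x_i|^p for 1 < p < 2."""
    n = L.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        w = (np.abs(x) + eps) ** (p - 2)            # |x_i|^p is locally w_i * x_i^2
        x = np.linalg.solve(L.T @ L + lam * np.diag(w), L.T @ b)
    return x
```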

  20. Preschoolers value those who sanction non-cooperators.

    PubMed

    Vaish, Amrisha; Herrmann, Esther; Markmann, Christiane; Tomasello, Michael

    2016-08-01

    Large-scale human cooperation among unrelated individuals requires the enforcement of social norms. However, such enforcement poses a problem because non-enforcers can free ride on others' costly and risky enforcement. One solution is that enforcers receive benefits relative to non-enforcers. Here we show that this solution becomes functional during the preschool years: 5-year-old (but not 4-year-old) children judged enforcers of norms more positively, preferred enforcers, and distributed more resources to enforcers than to non-enforcers. The ability to sustain not only first-order but also second-order cooperation thus emerges quite early in human ontogeny, providing a viable solution to the problem of higher-order cooperation. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Demographically corrected norms for the Brief Visuospatial Memory Test-revised and Hopkins Verbal Learning Test-revised in monolingual Spanish speakers from the U.S.-Mexico border region.

    PubMed

    Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R

    2007-03-01

    The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-revised and the Hopkins Verbal Learning Test-revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education, are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.

  2. Generally representative is representative of none: commentary on the pitfalls of IQ test standardization in multicultural settings.

    PubMed

    Shuttleworth-Edwards, A B

    2016-10-01

    The aim of this paper is to address the issue of IQ testing within the multicultural context, with a focus on the adequacy of nationwide population-based norms vs. demographically stratified within-group norms for valid assessment purposes. Burgeoning cultural diversity worldwide creates a pressing need to cultivate culturally fair psychological assessment practices. Commentary is provided to highlight sources of test-taking bias on tests of intellectual ability that may incur invalid placement and diagnostic decisions in multicultural settings. Methodological aspects of population vs. within-group norming solutions are delineated and the challenges of culturally relevant norm development are discussed. Illustrative South African within-group comparative data are supplied to support the review. A critical evaluation of the South African WAIS-III and the WAIS-IV standardizations further serves to exemplify the issues. A flaw in both South African standardizations is failure to differentiate between African first language individuals with a background of advantaged education vs. those from educationally disadvantaged settings. In addition, the standardizations merge the performance outcomes of distinct racial/ethnic groups that are characterized by differentially advantaged or disadvantaged backgrounds. Consequently, the conversion tables are without relevance for any one of the disparate South African cultural groups. It is proposed that the traditional notion of a countrywide unitary norming (also known as 'population-based norms') of an IQ test is an unsatisfactory model for valid assessment practices in diverse cultural contexts. The challenge is to develop new solutions incorporating data from finely stratified within-group norms that serve to reveal rather than obscure cross-cultural disparity in cognitive test performance.

  3. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    NASA Astrophysics Data System (ADS)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature at any heliospheric distance. The fluctuations compare the standard deviation of a moving average of three hours against the running average of the parameter in a month (considered as the local fluctuations) and in a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences of the norms due to large-scale structures in each period.
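    The following numpy sketch shows one plausible reading of the construction (not the authors' exact definitions): a 3-hour rolling standard deviation referenced to one-month and one-year running averages, with the seesaw norm taken as the Euclidean norm of the resulting (local, global) fluctuation pair. Window lengths and normalization are illustrative assumptions.

        import numpy as np

        def running_mean(x, window):
            """Centered running average (window clipped to the series length)."""
            window = max(1, min(window, len(x)))
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="same")

        def seesaw_norm(param, cadence_min=1):
            """Seesaw-style norm for one solar-wind parameter sampled every
            cadence_min minutes (assumed definitions, see lead-in)."""
            per_hour = 60 // cadence_min
            mean_3h = running_mean(param, 3 * per_hour)
            mean_sq_3h = running_mean(param**2, 3 * per_hour)
            std_3h = np.sqrt(np.maximum(mean_sq_3h - mean_3h**2, 0.0))
            month_avg = running_mean(param, 30 * 24 * per_hour)
            year_avg = running_mean(param, 365 * 24 * per_hour)
            local_fluct = std_3h / np.abs(month_avg)    # local fluctuation
            global_fluct = std_3h / np.abs(year_avg)    # global fluctuation
            # Norm of the (local, global) vector at each time step.
            return np.hypot(local_fluct, global_fluct)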

  4. The roles of outlet density and norms in alcohol use disorder.

    PubMed

    Ahern, Jennifer; Balzer, Laura; Galea, Sandro

    2015-06-01

    Alcohol outlet density and norms shape alcohol consumption. However, due to analytic challenges we do not know: (a) if alcohol outlet density and norms also shape alcohol use disorder, and (b) whether they act in combination to shape disorder. We applied a new targeted minimum loss-based estimator for rare outcomes (rTMLE) to a general population sample from New York City (N = 4000) to examine the separate and combined relations of neighborhood alcohol outlet density and norms around drunkenness with alcohol use disorder. Alcohol use disorder was assessed using the World Mental Health Comprehensive International Diagnostic Interview (WMH-CIDI) alcohol module. Confounders included demographic and socioeconomic characteristics, as well as history of drinking prior to residence in the current neighborhood. Alcohol use disorder prevalence was 1.78%. We found a marginal risk difference for alcohol outlet density of 0.88% (95% CI 0.00-1.77%), and for norms of 2.05% (95% CI 0.89-3.21%), adjusted for confounders. While each exposure had a substantial relation with alcohol use disorder, there was no evidence of additive interaction between the exposures. Results indicate that the neighborhood environment shapes alcohol use disorder. Despite the lack of additive interaction, each exposure had a substantial relation with alcohol use disorder and our findings suggest that alteration of outlet density and norms together would likely be more effective than either one alone. Important next steps include development and testing of multi-component intervention approaches aiming to modify alcohol outlet density and norms toward reducing alcohol use disorder. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
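    As a scalar (SISO) sketch of the SK-style reweighted least-squares initialization mentioned above: fit b(s)/a(s) to frequency-response samples by repeatedly solving a weighted linear least-squares problem, reweighting by the previous denominator. The matrix-fraction (MIMO) structure, sparse QR factorization, and the Gauss-Newton refinement of the paper are not reproduced; the model orders and weighting are illustrative assumptions.

        import numpy as np

        def sk_iteration(freqs, H, nb=2, na=2, iters=10):
            """Reweighted least-squares fit of b(s)/a(s), a(s) monic, to
            frequency-response data H(j*w) (Sanathanan-Koerner style)."""
            s = 1j * freqs
            w = np.ones(len(H))                      # SK weights, updated each pass
            for _ in range(iters):
                B = np.column_stack([s**k for k in range(nb + 1)])
                A = np.column_stack([s**k for k in range(na)])   # a_na fixed to 1
                # Linearized residual: b(s) - H*sum_{k<na} a_k s^k = H*s^na
                M = np.hstack([B, -(H[:, None]) * A]) / w[:, None]
                rhs = (H * s**na) / w
                theta, *_ = np.linalg.lstsq(
                    np.vstack([M.real, M.imag]),
                    np.concatenate([rhs.real, rhs.imag]),
                    rcond=None,
                )
                b = theta[: nb + 1]
                a = np.concatenate([theta[nb + 1:], [1.0]])
                w = np.abs(np.polyval(a[::-1], s))   # reweight by |a(jw)| of last pass
            return b, a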

  6. Application of Mixed H2/H Infinity Optimization

    DTIC Science & Technology

    1991-11-01

    [No abstract available. The indexed text consists only of the report's list of figures: singular-value (sigma) plots of the open-loop system, the H2 and H-infinity optimal designs, the H-infinity central solution, and the mixed H2/H-infinity controllers and closed-loop transfer functions Td/Ted at several norm levels.]

  7. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
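    A toy one-dimensional illustration of the underlying idea, interpolating values and first derivatives with an underdetermined polynomial basis and returning the coefficient vector of minimum weighted 2-norm: the weighting (1 + k)^s standing in for a Sobolev-type norm, the degree, and the 1-D setting are all assumptions, not the construction of the paper.

        import numpy as np

        def msn_interpolate_1d(x, y, dy, degree=30, s=2.0):
            """Minimum-weighted-norm polynomial interpolation of values y and
            derivative values dy at nodes x (sketch)."""
            k = np.arange(degree + 1)
            V = x[:, None] ** k                                        # value rows
            D = np.where(k > 0, k * x[:, None] ** np.clip(k - 1, 0, None), 0.0)
            A = np.vstack([V, D])                                      # constraints
            b = np.concatenate([y, dy])
            w = (1.0 + k) ** s                                         # Sobolev-type weights
            # Substitute c = d / w; lstsq on the underdetermined system returns
            # the minimum 2-norm d, from which the weighted-minimum-norm c follows.
            d, *_ = np.linalg.lstsq(A / w, b, rcond=None)
            return d / w      # coefficients of p(t) = sum_k c_k t^k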

  8. Clinical ethics and values: how do norms evolve from practice?

    PubMed

    Spranzi, Marta

    2013-02-01

    Bioethics laws in France have just undergone a revision process. The bioethics debate is often cast in terms of ethical principles and norms resisting emerging social and technological practices. This leads to the expression of confrontational attitudes based on widely differing interpretations of the same principles and values, and ultimately results in a deadlock. In this paper I would like to argue that focusing on values, as opposed to norms and principles, provides an interesting perspective on the evolution of norms. As Joseph Raz has convincingly argued, "life-building" values and practices are closely intertwined. Precisely because values have a more indeterminate meaning than norms, they can be cited as reasons for action by concerned stakeholders, and thus can help us understand how controversial practices, e.g. surrogate motherhood, can be justified. Finally, norms evolve when the interpretations of the relevant values shift and cause a change in the presumptions implicit in the norms. Thus, norms are not a prerequisite of the ethical solution of practical dilemmas, but rather the outcome of the decision-making process itself. Struggling to reach the right decision in controversial clinical ethics situations indirectly causes social and moral values to change and principles to be understood differently.

  9. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used an MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
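    A minimal numerical sketch of the kind of spatial filter compared above (not the authors' exact pipeline): the regularized minimum-norm estimator W = L^T (L L^T + lambda^2 C)^(-1), with L the lead-field matrix and C a noise covariance. The dimensions and regularization value are illustrative assumptions only.

        import numpy as np

        def minimum_norm_filter(L, noise_cov, lam=0.1):
            """Regularized minimum-norm (MN) spatial filter (sketch).
            L         : (n_sensors, n_sources) lead-field matrix
            noise_cov : (n_sensors, n_sensors) noise covariance C
            Returns W so that the source estimate is j_hat = W @ data."""
            gram = L @ L.T + lam**2 * noise_cov
            return L.T @ np.linalg.solve(gram, np.eye(gram.shape[0]))

        # Example with random dimensions, only to show the shapes involved:
        rng = np.random.default_rng(0)
        L = rng.standard_normal((64, 5000))      # 64 sensors, 5000 cortical sources
        W = minimum_norm_filter(L, np.eye(64))   # (5000, 64) spatial filter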

  10. Path planning for robotic truss assembly

    NASA Technical Reports Server (NTRS)

    Sanderson, Arthur C.

    1993-01-01

    A new Potential Fields approach to the robotic path planning problem is proposed and implemented. Our approach, which is based on one originally proposed by Munger, computes an incremental joint vector based upon attraction to a goal and repulsion from obstacles. By repetitively adding and computing these 'steps', it is hoped (but not guaranteed) that the robot will reach its goal. An attractive force exerted by the goal is found by solving for the minimum norm solution to the linear Jacobian equation. A repulsive force between obstacles and the robot's links is used to avoid collisions. Its magnitude is inversely proportional to the distance. Together, these forces make the goal the global minimum potential point, but local minima can stop the robot from ever reaching that point. Our approach improves on a basic potential field paradigm developed by Munger by using an active, adaptive field - what we will call a 'flexible' potential field. Active fields are stronger when objects move towards one another and weaker when they move apart. An adaptive field's strength is individually tailored to be just strong enough to avoid any collision. In addition to the local planner, a global planning algorithm helps the planner to avoid local field minima by providing subgoals. These subgoals are based on the obstacles which caused the local planner to fail. A best-first search algorithm, A*, is used for the graph search.
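    A hedged sketch of one such incremental step: the attractive part is the minimum-norm solution of the linear Jacobian equation J dq = dx obtained with the pseudoinverse, and a simple 1/distance repulsive term is added. The adaptive/'flexible' field scaling of the paper is omitted, and the joint-space form of the repulsive directions is an assumption.

        import numpy as np

        def potential_field_step(J, x_err, obstacle_dirs, obstacle_dists,
                                 k_att=0.1, k_rep=0.05):
            """One incremental joint update of a potential-field planner (sketch).
            J             : task Jacobian at the current configuration
            x_err         : task-space error toward the goal
            obstacle_dirs : joint-space directions pointing away from obstacles
            obstacle_dists: corresponding obstacle distances"""
            # Minimum-norm joint step solving J dq = k_att * x_err.
            dq_att = np.linalg.pinv(J) @ (k_att * x_err)
            dq_rep = np.zeros_like(dq_att)
            for direction, dist in zip(obstacle_dirs, obstacle_dists):
                dq_rep += k_rep * direction / max(dist, 1e-6)   # 1/distance repulsion
            return dq_att + dq_rep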

  11. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear poisson problem posed on the Ottawa Flat 270 design geometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics, the mean value of the solution (an L1 norm) and the field integral of the solution (an L2 norm).

  12. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  13. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  14. Robust Principal Component Analysis Regularized by Truncated Nuclear Norm for Identifying Differentially Expressed Genes.

    PubMed

    Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun

    2017-09-01

    Identifying differentially expressed genes from the thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that the nuclear norm shrinks all singular values, so it may not be the best surrogate for the rank function. The truncated nuclear norm is defined as the sum of some smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm, which is named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as the sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
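    In proximal or alternating schemes, a truncated nuclear norm leads to a shrinkage step that soft-thresholds only the trailing singular values while leaving the r largest untouched. The sketch below shows such a step only; it is not the authors' full TRPCA algorithm, which also maintains and updates a sparse component.

        import numpy as np

        def truncated_svt(X, tau, r):
            """Singular-value shrinkage associated with a truncated nuclear norm:
            keep the r largest singular values, soft-threshold the rest by tau."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s_shrunk = s.copy()
            s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)
            return (U * s_shrunk) @ Vt     # low-rank-biased reconstruction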

  15. Local descriptive body weight and dietary norms, food availability, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort.

    PubMed

    Carroll, Suzanne J; Paquet, Catherine; Howard, Natasha J; Coffee, Neil T; Adams, Robert J; Taylor, Anne W; Niyonsenga, Theo; Daniel, Mark

    2017-02-02

    Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m² [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability was associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm. Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources such that availability poses no barrier to obtaining healthful or unhealthful foods for this region. However, the influence of local descriptive norms varied according to food resource availability in effects on HbA1c. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes.

  16. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  17. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
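    The reduction described above, from matrix projections onto Schatten norm balls to vector projections onto lq norm balls, can be sketched as follows for q = 1 and q = infinity. The l1-ball routine is the standard sort-based projection; everything here is an illustrative sketch, not the authors' implementation.

        import numpy as np

        def project_l1_ball(v, radius):
            """Euclidean projection of a nonnegative vector onto the l1 ball."""
            if v.sum() <= radius:
                return v.copy()
            u = np.sort(v)[::-1]
            css = np.cumsum(u)
            rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
            theta = (css[rho] - radius) / (rho + 1)
            return np.maximum(v - theta, 0.0)

        def project_schatten_ball(X, radius, q=1):
            """Project X onto a Schatten-q norm ball by projecting its singular
            values onto the corresponding vector lq ball (sketch; only q = 1,
            the nuclear norm, and q = inf, the spectral norm, are handled)."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            if q == 1:
                s_proj = project_l1_ball(s, radius)
            elif q == np.inf:
                s_proj = np.minimum(s, radius)       # clip to the spectral radius
            else:
                raise NotImplementedError("general q needs an lq-ball projection")
            return (U * s_proj) @ Vt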

  18. An Improved Measure of Cognitive Salience in Free Listing Tasks: A Marshallese Example

    ERIC Educational Resources Information Center

    Robbins, Michael C.; Nolan, Justin M.; Chen, Diana

    2017-01-01

    A new free-list measure of cognitive salience, B', is presented, which includes both list position and list frequency. It surpasses other extant measures by being normed to vary between a maximum of 1 and a minimum of 0, thereby making it useful for comparisons irrespective of list length or number of respondents. An illustration of its…

  19. Determining genetic erosion in fourteen Picea chihuahuana Martínez populations.

    Treesearch

    C.Z. Quiñones-Pérez; C. Wehenkel

    2017-01-01

    Picea chihuahuana is an endemic species in Mexico and is considered endangered, according to the Mexican Official Norm (NOM-ECOL-059-2010). This species covers a total area of no more than 300 ha located in at least 40 sites along the Sierra Madre Occidental in Durango and Chihuahua states. A minimum of 42,600 individuals has been estimated,...

  20. On the Minimum Induced Drag of Wings

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2010-01-01

    Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools by which to calculate and predict induced drag we use were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb

  1. On the Minimum Induced Drag of Wings -or- Thinking Outside the Box

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2011-01-01

    Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools by which to calculate and predict induced drag we use were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.

  2. On the Minimum Induced Drag of Wings

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2011-01-01

    Of all the types of drag, induced drag is associated with the creation and generation of lift over wings. Induced drag is directly driven by the span load that the aircraft is flying at. The tools by which to calculate and predict induced drag we use were created by Ludwig Prandtl in 1903. Within a decade after Prandtl created a tool for calculating induced drag, Prandtl and his students had optimized the problem to solve the minimum induced drag for a wing of a given span, formalized and written about in 1920. This solution is quoted in textbooks extensively today. Prandtl did not stop with this first solution, and came to a dramatically different solution in 1932. Subsequent development of this 1932 solution solves several aeronautics design difficulties simultaneously, including maximum performance, minimum structure, minimum drag loss due to control input, and solution to adverse yaw without a vertical tail. This presentation lists that solution by Prandtl, and the refinements by Horten, Jones, Kline, Viswanathan, and Whitcomb.

  3. Parametric study of minimum reactor mass in energy-storage dc-to-dc converters

    NASA Technical Reports Server (NTRS)

    Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.

    1981-01-01

    Closed-form analytical solutions for the design equations of a minimum-mass reactor for a two-winding voltage-or-current step-up converter are derived. A quantitative relationship between the three parameters - minimum total reactor mass, maximum output power, and switching frequency - is extracted from these analytical solutions. The validity of the closed-form solution is verified by a numerical minimization procedure. A computer-aided design procedure using commercially available toroidal cores and magnet wires is also used to examine how the results from practical designs follow the predictions of the analytical solutions.

  4. Fuzzy α-minimum spanning tree problem: definition and solutions

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan

    2016-04-01

    In this paper, the minimum spanning tree problem is investigated on a graph with fuzzy edge weights. The notion of the fuzzy α-minimum spanning tree is presented based on the credibility measure, and the solutions of the fuzzy α-minimum spanning tree problem are then discussed under different assumptions. First, we assume that all the edge weights are triangular fuzzy numbers or trapezoidal fuzzy numbers, respectively, and prove that in these two cases the fuzzy α-minimum spanning tree problem can be transformed into a classical problem on a crisp graph, which can be solved in polynomial time by classical algorithms such as the Kruskal algorithm and the Prim algorithm. Subsequently, for the case in which the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using a Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided to illustrate the effectiveness of the proposed solutions.
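    Once the fuzzy problem has been reduced to a crisp one, as the abstract describes for triangular and trapezoidal weights, a plain Kruskal algorithm applies. The sketch below shows only that crisp step; the credibility-based conversion of fuzzy weights to crisp ones is not reproduced here.

        def kruskal_mst(n_vertices, edges):
            """Kruskal's algorithm on crisp edge weights.
            edges: iterable of (weight, u, v) with 0-based vertex indices."""
            parent = list(range(n_vertices))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x

            tree = []
            for w, u, v in sorted(edges):           # ascending weight
                ru, rv = find(u), find(v)
                if ru != rv:                        # accept edge if it joins components
                    parent[ru] = rv
                    tree.append((u, v, w))
            return tree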

  5. Social Norms and Global Environmental Challenges: The Complex Interaction of Behaviors, Values, and Policy

    PubMed Central

    Ehrlich, Paul R.; Alston, Lee J.; Arrow, Kenneth; Barrett, Scott; Buchman, Timothy G.; Daily, Gretchen C.; Levin, Bruce; Levin, Simon; Oppenheimer, Michael; Ostrom, Elinor; Saari, Donald

    2014-01-01

    SUMMARY Government policies are needed when people’s behaviors fail to deliver the public good. Those policies will be most effective if they can stimulate long-term changes in beliefs and norms, creating and reinforcing the behaviors needed to solidify and extend the public good. It is often the short-term acceptability of potential policies, rather than their longer-term efficacy, that determines their scope and deployment. The policy process should consider both time scales. The academy, however, has provided insufficient insight into the coevolution of social norms and different policy instruments, thus compromising the capacity of decision makers to craft effective solutions to society’s most intractable environmental problems. Life scientists could make fundamental contributions to this agenda through targeted research on the emergence of social norms. PMID:25143635

  6. National Institutes of Health Toolbox Emotion Battery for English- and Spanish-speaking adults: normative data and factor-based summary scores.

    PubMed

    Babakhanyan, Ida; McKenna, Benjamin S; Casaletto, Kaitlin B; Nowinski, Cindy J; Heaton, Robert K

    2018-01-01

    The National Institutes of Health Toolbox Emotion Battery (NIHTB-EB) is a "common currency", computerized assessment developed to measure the full spectrum of emotional health. Though comprehensive, the NIHTB-EB's 17 scales may be unwieldy for users aiming to capture more global indices of emotional functioning. NIHTB-EB was administered to 1,036 English-speaking and 408 Spanish-speaking adults as a part of the NIH Toolbox norming project. We examined the factor structure of the NIHTB-EB in English- and Spanish-speaking adults and developed factor analysis-based summary scores. Census-weighted norms were presented for English speakers, and sample-weighted norms were presented for Spanish speakers. Exploratory factor analysis for both English- and Spanish-speaking cohorts resulted in the same 3-factor solution: 1) negative affect, 2) social satisfaction, and 3) psychological well-being. Confirmatory factor analysis supported similar factor structures for English- and Spanish-speaking cohorts. Model fit indices fell within the acceptable/good range, and our final solution was optimal compared to other solutions. Summary scores based upon the normative samples appear to be psychometrically supported and should be applied to clinical samples to further validate the factor structures and investigate rates of problematic emotions in medical and psychiatric populations.

  7. LP-stability for the strong solutions of the Navier-Stokes equations in the whole space

    NASA Astrophysics Data System (ADS)

    Beirão da Veiga, H.; Secchi, P.

    1985-10-01

    We consider the motion of a viscous fluid filling the whole space R^3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed, too. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms (LP-norms) are introduced. For fluids filling a bounded vessel, exponential decay of the above distance is expected. Such a strong result is not reasonable for fluids filling the entire space.

  8. Should Pruning be a Pre-Processor of any Linear System?

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    There are many real-world problems whose mathematical models turn out to be linear systems Ax = b, where A is an m by n matrix. Each equation of the linear system is a piece of information. A piece of information in a physical problem, such as 4 mangoes, 6 bananas, and 5 oranges costing $10, is mathematically modeled as 4x1 + 6x2 + 5x3 = 10, where x1, x2, x3 are the cost of one mango, one banana, and one orange, respectively. All the information put together in a specified context constitutes the physical problem and need not be all distinct. Some of it could be redundant, and the redundancy cannot be readily identified by inspection. The resulting mathematical model will thus have equations corresponding to this redundant information, which are linearly dependent and thus superfluous. Consequently, once identified, these equations should be pruned in the process of solving the system. The benefits are (i) less computation and hence less error, and consequently a better quality of solution, and (ii) reduced storage requirements. In the literature, the pruning concept is not yet in vogue, although it is most desirable. In a numerical linear system, the system could be slightly inconsistent or inconsistent to a varying degree. If the system is too inconsistent, then we should fall back on the physical problem (PP), check the correctness of the PP derived from the material universe, modify it if necessary, and then check the corresponding mathematical model (MM) and correct it. In nature/the material universe, inconsistency is completely nonexistent. If the MM becomes inconsistent, it could be due to error introduced by the measuring device concerned, and/or due to assumptions made on the PP to obtain an MM which is relatively easily solvable, or simply due to human error. No measuring device can usually measure a quantity with an accuracy better than 0.005% or, equivalently, with a relative error less than 0.005%. Hence measurement error is unavoidable in a numerical linear system when the quantities are continuous (or even discrete but extremely numerous). Assumptions, though not desirable, are usually made when we find the problem sufficiently difficult to be solved within the available means/tools/resources, and they distort the PP and the corresponding MM. The error thus introduced in the system could (though not always necessarily) make the system somewhat inconsistent. If the inconsistency (contradiction) is too great, then one should definitely not proceed to solve the system in terms of getting a least-squares solution or a minimum norm solution or the minimum-norm least-squares solution. All these solutions will invariably be of no real-world use. If, on the other hand, inconsistency is reasonably low, i.e. the system is near-consistent or, equivalently, has near-linearly-dependent rows, then the foregoing solutions are useful. Pruning in such a near-consistent system should be performed based on the desired accuracy and on the definition of near-linear dependence. In this article, we discuss pruning over various kinds of linear systems and strongly suggest its use as a pre-processor or as a part of an algorithm. Ideally pruning should (i) be a part of the solution process (algorithm) of the system, (ii) reduce both computational error and complexity of the process, and (iii) take into account the numerical zero defined in the context. These are precisely what we achieve through our proposed O(mn^2) algorithm, presented in Matlab, which uses a subprogram for solving a single linear equation and has the pruning embedded in it.
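    A simple numpy sketch of the pruning idea, dropping rows of the augmented matrix [A | b] that are numerically dependent on rows already kept via a modified Gram-Schmidt sweep. This is not the paper's O(mn^2) Matlab routine; the tolerance and the residual-based test are illustrative assumptions.

        import numpy as np

        def prune_dependent_rows(A, b, tol=1e-8):
            """Keep only rows of [A | b] that are numerically independent of the
            rows kept so far; return the pruned A and b."""
            Ab = np.hstack([A, b.reshape(-1, 1)]).astype(float)
            kept_rows, basis = [], []            # basis: orthonormal spans of kept rows
            for i, row in enumerate(Ab):
                r = row.copy()
                for q in basis:                  # remove components along kept rows
                    r -= (q @ r) * q
                if np.linalg.norm(r) > tol * max(np.linalg.norm(row), 1.0):
                    basis.append(r / np.linalg.norm(r))
                    kept_rows.append(i)          # row carries new information
            return A[kept_rows], b[kept_rows]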

  9. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u - w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error \overline{u} - \overline{w^α} is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.

  10. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to find local solutions. Using a trust region method, which can provide globally convergent solutions, is a good way to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency should be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.

  11. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)(sub x), and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as point wise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  12. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
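    For reference, the classical greedy loop underlying matching-pursuit methods is sketched below as plain Orthogonal Matching Pursuit; the SOMP algorithm of the abstract adds a stabilization strategy and a rule for choosing the optimal sparsity level, which are not reproduced here.

        import numpy as np

        def omp(A, y, sparsity, tol=1e-10):
            """Plain Orthogonal Matching Pursuit: greedily select columns of A
            most correlated with the residual, then least-squares refit."""
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1])
            for _ in range(sparsity):
                j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
                if np.linalg.norm(residual) < tol:
                    break
            x[support] = coef
            return x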

  13. Selection and Validation of Appropriate Reference Genes for qRT-PCR Analysis in Isatis indigotica Fort.

    PubMed Central

    Li, Tao; Wang, Jing; Lu, Miao; Zhang, Tianyi; Qu, Xinyun; Wang, Zhezhi

    2017-01-01

    Due to its sensitivity and specificity, real-time quantitative PCR (qRT-PCR) is a popular technique for investigating gene expression levels in plants. Based on the Minimum Information for Publication of Real-Time Quantitative PCR Experiments (MIQE) guidelines, it is necessary to select and validate putative appropriate reference genes for qRT-PCR normalization. In the current study, three algorithms, geNorm, NormFinder, and BestKeeper, were applied to assess the expression stability of 10 candidate reference genes across five different tissues and three different abiotic stresses in Isatis indigotica Fort. Additionally, the IiYUC6 gene associated with IAA biosynthesis was applied to validate the candidate reference genes. The analysis results of the geNorm, NormFinder, and BestKeeper algorithms indicated certain differences for the different sample sets and different experiment conditions. Considering all of the algorithms, PP2A-4 and TUB4 were recommended as the most stable reference genes for total and different tissue samples, respectively. Moreover, RPL15 and PP2A-4 were considered to be the most suitable reference genes for abiotic stress treatments. The obtained experimental results might contribute to improved accuracy and credibility for the expression levels of target genes by qRT-PCR normalization in I. indigotica. PMID:28702046

  14. Super Resolution Imaging Applied to Scientific Images

    DTIC Science & Technology

    2007-05-01

    [No abstract available. The indexed text is a fragment noting that a norm favored in the image restoration community allows discontinuities in its solution, unlike the L2 norm, followed by truncated bibliography entries.]

  15. Time-domain prefilter design for enhanced tracking and vibration suppression in machine motion control

    NASA Astrophysics Data System (ADS)

    Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong

    2018-05-01

    Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
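    Because the abstract relates tracking error to the moment values of the prefilter impulse response, a minimum-norm flavour of the idea can be sketched as follows: choose FIR taps of minimum 2-norm whose impulse-response moments match prescribed values. The vibration-cancellation constraints and the H2-optimal design of the paper are not modeled; the moment targets below are illustrative assumptions.

        import numpy as np

        def min_norm_fir_from_moments(n_taps, target_moments, dt=1.0):
            """FIR taps h of minimum 2-norm with prescribed impulse-response
            moments: sum_k (k*dt)^p * h[k] = target_moments[p]."""
            times = np.arange(n_taps) * dt
            M = np.vstack([times**p for p in range(len(target_moments))])
            # Underdetermined system: lstsq returns the minimum 2-norm solution.
            h, *_ = np.linalg.lstsq(M, np.asarray(target_moments, float), rcond=None)
            return h

        # Example: unit DC gain (zeroth moment 1) and zero first moment
        # (no added mean delay) over 16 taps.
        h = min_norm_fir_from_moments(16, [1.0, 0.0])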

  16. Global well-posedness and asymptotic behavior of solutions for the three-dimensional MHD equations with Hall and ion-slip effects

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaopeng; Zhu, Mingxuan

    2018-04-01

    In this paper, we consider the global well-posedness of solutions with small initial data for the magnetohydrodynamic equations with Hall and ion-slip effects in R^3. In addition, we establish temporal decay estimates for the weak solutions. With these estimates in hand, we study the algebraic time decay of higher-order Sobolev norms of solutions with small initial data.

  17. A latent class regression analysis of men's conformity to masculine norms and psychological distress.

    PubMed

    Wong, Y Joel; Owen, Jesse; Shea, Munyi

    2012-01-01

    How are specific dimensions of masculinity related to psychological distress in specific groups of men? To address this question, the authors used latent class regression to assess the optimal number of latent classes that explained differential relationships between conformity to masculine norms and psychological distress in a racially diverse sample of 223 men. The authors identified a 2-class solution. Both latent classes demonstrated very different associations between conformity to masculine norms and psychological distress. In Class 1 (labeled risk avoiders; n = 133), conformity to the masculine norm of risk-taking was negatively related to psychological distress. In Class 2 (labeled detached risk-takers; n = 90), conformity to the masculine norms of playboy, self-reliance, and risk-taking was positively related to psychological distress, whereas conformity to the masculine norm of violence was negatively related to psychological distress. A post hoc analysis revealed that younger men and Asian American men (compared with Latino and White American men) had significantly greater odds of being in Class 2 versus Class 1. The implications of these findings for future research and clinical practice are examined. (c) 2012 APA, all rights reserved.

  18. Characterizing L1-norm best-fit subspaces

    NASA Astrophysics Data System (ADS)

    Brooks, J. Paul; Dulá, José H.

    2017-05-01

    Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.

  19. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear norm constraints. However, the results obtained by convex optimization are usually suboptimal compared with solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. The resulting affinity graph can better capture the local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is shown to be more effective and robust compared to five existing algorithms.

  20. A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Li, Meng; Gu, Xian-Ming; Huang, Chengming; Fei, Mingfa; Zhang, Guoyu

    2018-04-01

    In this paper, a fast linearized conservative finite element method is studied for solving the strongly coupled nonlinear fractional Schrödinger equations. We prove that the scheme preserves both the mass and the energy, which are defined by virtue of some recursion relationships. Using the Sobolev inequalities and mathematical induction, the discrete scheme is proved to be unconditionally convergent in the sense of the L2-norm and the H^{α/2}-norm, which means that there are no constraints on the grid ratios. Then, a priori bounds of the discrete solution in the L2-norm and L∞-norm are also obtained. Moreover, we propose an iterative algorithm in which the coefficient matrix is independent of the time level, leading to Toeplitz-like linear systems that can be efficiently solved by Krylov subspace solvers with circulant preconditioners. This method can reduce the memory requirement of the proposed linearized finite element scheme from O(M^2) to O(M) and the computational complexity from O(M^3) to O(M log M) in each iterative step, where M is the number of grid nodes. Finally, numerical experiments are carried out to verify the correctness of the theoretical analysis, simulate the collision of two solitary waves, and show the utility of the fast numerical solution techniques.
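    The O(M log M) cost per step rests on the fact that a circulant preconditioner is diagonalized by the FFT. A hedged sketch of this mechanism, using Strang-type eigenvalues for a symmetric Toeplitz matrix (not the authors' specific preconditioner or solver):

        import numpy as np

        def strang_circulant_eigs(t_col):
            """Eigenvalues of a Strang-type circulant built from the first column
            t_col of a symmetric Toeplitz matrix: central diagonals are kept and
            the outer ones wrapped around; eigenvalues are the DFT of the
            circulant's first column."""
            n = len(t_col)
            c = np.array(t_col, dtype=complex)
            for k in range(1, n):
                c[k] = t_col[k] if k <= n // 2 else t_col[n - k]
            return np.fft.fft(c)

        def apply_circulant_inverse(eigs, r):
            """Solve C z = r for the circulant C with the given eigenvalues,
            in O(M log M) via FFT diagonalization."""
            return np.real(np.fft.ifft(np.fft.fft(r) / eigs))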

  1. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647

  2. Subcritical transition scenarios via linear and nonlinear localized optimal perturbations in plane Poiseuille flow

    NASA Astrophysics Data System (ADS)

    Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro

    2016-12-01

    Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework; or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing a contemporary growth of all of the velocity components due to strong nonlinear coupling.

  3. A Framework for Multi-Stakeholder Decision-Making and ...

    EPA Pesticide Factsheets

    This contribution describes the implementation of the conditional-value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distance to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimizes average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (reach consensus) by mitigating dissatisfactions among stakeholders as needed for SHC Decision Science and Support Tools project.
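    A hedged sketch of the core idea: stakeholder dissatisfactions (distances to individual ideal points) are treated as samples of a random variable, their CVaR is computed with the Rockafellar-Uryasev form, and the candidate decision with the smallest CVaR is selected. The paper solves a continuous CVaR minimization rather than enumerating candidates, and the distance measure and probability level below are illustrative assumptions.

        import numpy as np

        def cvar(values, beta):
            """Sample CVaR at level beta: the empirical beta-quantile is used as
            the threshold, and the tail beyond it is averaged."""
            t = np.quantile(values, beta)
            tail = np.maximum(values - t, 0.0)
            return t + tail.mean() / (1.0 - beta)

        def compromise_solution(candidates, ideal_points, beta=0.8):
            """Pick the candidate whose stakeholder-dissatisfaction CVaR is smallest.
            candidates   : (n_candidates, n_dims) decision alternatives
            ideal_points : (n_stakeholders, n_dims) stakeholder ideal solutions"""
            scores = []
            for x in candidates:
                dissatisfaction = np.linalg.norm(ideal_points - x, axis=1)
                scores.append(cvar(dissatisfaction, beta))
            best = int(np.argmin(scores))
            return candidates[best], scores[best]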

  4. Student Conceptions of Ionic Compounds in Solution and the Influences of Sociochemical Norms on Individual Learning

    ERIC Educational Resources Information Center

    Warfa, Abdi-Rizak M.

    2013-01-01

    Using the symbolic interactionist perspective that meaning is constituted as individuals interact with one another, this study examined how group thinking during cooperative inquiry-based activity on chemical bonding theories shaped and influenced college students' understanding of the properties of ionic compounds in solution. The analysis…

  5. Diagnosis and prevention of norm at Eugene Island 341-A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuler, P.J.; Baudoin, D.A.; Weintritt, D.J.

    1995-12-01

    We conducted a field study at Eugene Island 341-A to develop guidelines for the cost-effective prevention of NORM (Naturally Occurring Radioactive Materials). The specific objectives of this study are to: determine the root cause of the NORM problem at this facility, using a wide variety of diagnostic techniques; consider available engineering options to prevent NORM from forming; and determine the most cost-effective engineering solution. An overall objective is to generalize the results and diagnostic techniques developed for Eugene Island 341-A to other production facilities, especially in the Gulf of Mexico. This study shows that the NORM problem at Eugene Island 341-A stems from mixing incompatible produced waters at the surface. Wells completed in Sand Block A have a water with a relatively high barium concentration, while those in Sand Blocks B and C are high in sulfate. When these waters mix (starting in the production headers), barium sulfate forms. Radium that is present in the produced brines co-precipitates with the barium, thus creating a radioactive barium sulfate scale deposit (NORM). The barium sulfate (and hence NORM) can be prevented by improving the current scale inhibition program. Keys to an effective program are the continual, reliable injection of an appropriate scale inhibitor at an effective dosage, ahead of the point where scaling conditions begin.

  6. WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS

    PubMed Central

    MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN

    2013-01-01

    Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibilities in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both the L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving many interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935

  7. Tensor completion for estimating missing values in visual data.

    PubMed

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples, and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred if a high-accuracy solution is desired.
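
    The tensor trace norm used in this line of work is a (weighted) sum of nuclear norms of the mode unfoldings, and the simplest relaxation alternates singular-value shrinkage per mode with re-imposition of the observed entries. The sketch below is a simplified illustration of that idea only, not the authors' SiLRTC/FaLRTC/HaLRTC implementations; the threshold, iteration count, and test tensor are arbitrary.

      import numpy as np

      def unfold(X, mode):
          return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

      def fold(M, mode, shape):
          full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
          return np.moveaxis(M.reshape(full), 0, mode)

      def svt(M, tau):
          # singular value thresholding: proximal operator of the nuclear norm
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return (U * np.maximum(s - tau, 0.0)) @ Vt

      def complete(T, mask, tau=1.0, n_iter=200):
          """Heuristic low-rank tensor completion: average the mode-wise SVT
          estimates, then put back the observed entries (mask is True where observed)."""
          X = np.where(mask, T, 0.0)
          for _ in range(n_iter):
              est = sum(fold(svt(unfold(X, m), tau), m, X.shape) for m in range(X.ndim)) / X.ndim
              X = np.where(mask, T, est)
          return X

      rng = np.random.default_rng(0)
      A = rng.standard_normal((20, 1)) @ rng.standard_normal((1, 15))
      T = np.stack([A * c for c in (1.0, 2.0, 3.0)], axis=2)   # rank-1 20x15x3 test tensor
      mask = rng.random(T.shape) > 0.5                          # ~50% of entries observed
      X = complete(T, mask)
      print("relative error:", np.linalg.norm(X - T) / np.linalg.norm(T))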

  8. Smooth solutions of the Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhozhaev, S I

    2014-02-28

    We consider smooth solutions of the Cauchy problem for the Navier-Stokes equations on the scale of smooth functions which are periodic with respect to x ∈ R³. We obtain existence theorems for global (with respect to t > 0) and local solutions of the Cauchy problem. The statements of these theorems depend on the smoothness and the norm of the initial vector function. Upper bounds for the behaviour of solutions in both classes, which depend on t, are also obtained. Bibliography: 10 titles.

  9. Blow-up of solutions to a quasilinear wave equation for high initial energy

    NASA Astrophysics Data System (ADS)

    Li, Fang; Liu, Fang

    2018-05-01

    This paper deals with blow-up solutions to a nonlinear hyperbolic equation with variable exponent of nonlinearities. By constructing a new control function and using energy inequalities, the authors obtain a lower bound estimate of the L2 norm of the solution. Furthermore, concavity arguments are used to prove the nonexistence of solutions; at the same time, an estimate of the upper bound of the blow-up time is also obtained. This result extends and improves those of [1,2].

  10. Control of NORM at Eugene Island 341-A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuler, P.J.; Baudoin, D.A.; Weintritt, D.J.

    1995-12-31

    A field study at Eugene island 341-A, an offshore production platform in the Gulf of Mexico, was conducted to develop strategies for the cost-effective prevention of NORM (Naturally Occurring Radioactive Materials) deposits. The specific objectives of this study were to: (1) Determine the root cause for the NORM deposits at this facility, utilizing different diagnostic techniques. (2) Consider all engineering options that are designed to prevent NORM from forming. (3) Determine the most cost-effective engineering solution. An overall objective was to generalize the diagnostics and control methods developed for Eugene Island 341-A to other oil and gas production facilities, especially to platforms located in the Gulf of Mexico. This study determined that the NORM deposits found at Eugene Island 341-A stem from commingling incompatible produced waters at the surface. Wells completed in Sand Block A have a water containing a relatively high concentration of barium, while those formation brines in Sand Blocks B and C are high in sulfate. When these waters mix at the start of the fluid treatment facilities on the platform, barium sulfate forms. Radium that is present in the produced brines co-precipitates with the barium, thereby creating a radioactive barium sulfate scale deposit (NORM).

  11. Determination of unknown coefficient in a non-linear elliptic problem related to the elastoplastic torsion of a bar

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar; Erdem, Arzu

    2008-08-01

    The inverse problem of determining the unknown coefficient of the non-linear differential equation of torsional creep is studied. The unknown coefficient g = g(ξ²) depends on the gradient ξ := |∇u| of the solution u(x), x ∈ Ω ⊂ Rⁿ, of the direct problem. It is proved that this gradient is bounded in the C-norm. This permits one to choose the natural class of admissible coefficients for the considered inverse problem. The continuity, in the norm of the Sobolev space H¹(Ω), of the solution u(x;g) of the direct problem with respect to the unknown coefficient g = g(ξ²) is obtained in the following sense: ||u(x;g) − u(x;g_m)||₁ → 0 when g_m(η) → g(η) pointwise as m → ∞. Based on these results, the existence of a quasi-solution of the inverse problem in the considered class of admissible coefficients is obtained. Numerical examples related to determination of the unknown coefficient are presented.

  12. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution with strong order [Formula: see text] for SLSDDEs. On the one hand, the classical stability theorem for SLSDDEs is given via Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method and prove, using the properties of the logarithmic norm, that the exponential Euler method for SLSDDEs shares the same stability for any step size.

  13. When Do Laws Matter? National Minimum-Age-of-Marriage Laws, Child Rights, and Adolescent Fertility, 1989–2007

    PubMed Central

    Kim, Minzee; Longhofer, Wesley; Boyle, Elizabeth Heger; Nyseth, Hollie

    2014-01-01

    Using the case of adolescent fertility, we ask the questions of whether and when national laws have an effect on outcomes above and beyond the effects of international law and global organizing. To answer these questions, we utilize a fixed-effect time-series regression model to analyze the impact of minimum-age-of-marriage laws in 115 poor- and middle-income countries from 1989 to 2007. We find that countries with strict laws setting the minimum age of marriage at 18 experienced the most dramatic decline in rates of adolescent fertility. Trends in countries that set this age at 18 but allowed exceptions (for example, marriage with parental consent) were indistinguishable from countries that had no such minimum-age-of-marriage law. Thus, policies that adhere strictly to global norms are more likely to elicit desired outcomes. The article concludes with a discussion of what national law means in a diffuse global system where multiple actors and institutions make the independent effect of law difficult to identify. PMID:25525281

  14. Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities

    NASA Astrophysics Data System (ADS)

    Pankov, A. A.

    1983-04-01

    In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^\\infty norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.

  15. ℓ1-norm and entanglement in screening out braiding from Yang-Baxter equation associated with Z3 parafermion

    NASA Astrophysics Data System (ADS)

    Yu, Li-Wei; Ge, Mo-Lin

    2017-03-01

    The relationships between quantum entangled states and braid matrices have been well studied in recent years. However, most of the results are based on qubits. In this paper, we investigate the applications of 2-qutrit entanglement in the braiding associated with the Z3 parafermion. The 2-qutrit entangled state |Ψ(θ)⟩, generated by the action of the localized unitary solution R̆(θ) of the YBE on the 2-qutrit natural basis, achieves its maximal ℓ1-norm and maximal von Neumann entropy simultaneously at θ = π/3. Meanwhile, at θ = π/3, the solutions of the YBE reduce to braid matrices, which implies the role that the ℓ1-norm and entropy play in determining real physical quantities. On the other hand, we give a new realization of the 4-anyon topological basis by qutrit entangled states; the 9 × 9 localized braid representation in the 4-qutrit tensor product space (C³)^⊗4 is then reduced to the Jones representation of braiding in the 4-anyon topological basis. Hence, we conclude that entangled states are powerful tools in analysing the characteristics of braiding and the R̆-matrix.

  16. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next-generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l∞ norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l∞ norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
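
    The conversion to a linear program is straightforward to sketch: introduce a scalar bound t on all deflections, then minimize t subject to -t <= u_i <= t and to producing the commanded moments B u = d. The fragment below is a minimal illustration with a made-up 3-actuator effectiveness matrix, not the allocation code discussed in the paper.

      import numpy as np
      from scipy.optimize import linprog

      B = np.array([[1.0, 0.5, -0.3],     # hypothetical control-effectiveness matrix
                    [0.0, 1.0,  0.8]])    # (2 commanded axes, 3 actuators)
      d = np.array([0.4, -0.2])           # commanded moments

      m, n = B.shape
      # decision vector z = [u_1..u_n, t]; minimize t
      c = np.r_[np.zeros(n), 1.0]
      # |u_i| <= t  <=>  u_i - t <= 0  and  -u_i - t <= 0
      A_ub = np.block([[ np.eye(n), -np.ones((n, 1))],
                       [-np.eye(n), -np.ones((n, 1))]])
      b_ub = np.zeros(2 * n)
      A_eq = np.hstack([B, np.zeros((m, 1))])   # enforce B u = d exactly
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
                    bounds=[(None, None)] * n + [(0, None)])
      u, t = res.x[:n], res.x[n]
      print("allocation:", u, "max |deflection|:", t)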

  17. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem. This is because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximative because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline freeform deformation (p<0.05 in amygdala and cortical gray matter).

  18. Path planning for assembly of strut-based structures. Thesis

    NASA Technical Reports Server (NTRS)

    Muenger, Rolf

    1991-01-01

    A path planning method with collision avoidance for a general single chain nonredundant or redundant robot is proposed. Joint range boundary overruns are also avoided. The result is a sequence of joint vectors which are passed to a trajectory planner. A potential field algorithm in joint space computes incremental joint vectors Δq = Δq_a + Δq_c + Δq_r. Adding Δq to the robot's current joint vector leads to the next step in the path. Δq_a is obtained by computing the minimum norm solution of the underdetermined linear system J Δq_a = x_a, where x_a is a translational and rotational force vector that attracts the robot to its goal position and orientation. J is the manipulator Jacobian. Δq_c is a collision avoidance term encompassing collisions between the robot (links and payload) and obstacles in the environment as well as collisions among the links and payload of the robot themselves. It is obtained in joint space directly. Δq_r is a function of the current joint vector and avoids joint range overruns. A higher level discrete search over candidate safe positions is used to provide alternatives in case the potential field algorithm encounters a local minimum and thus fails to reach the goal. The best first search algorithm A* is used for graph search. Symmetry properties of the payload and equivalent rotations are exploited to further enlarge the number of alternatives passed to the potential field algorithm.
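
    The minimum norm solution of the underdetermined system J Δq_a = x_a is exactly what the Moore-Penrose pseudoinverse returns, so the attraction step can be sketched in a few lines (illustrative random Jacobian and wrench only, not the thesis implementation):

      import numpy as np

      rng = np.random.default_rng(1)
      J = rng.standard_normal((6, 8))        # 6-DOF task, 8-joint redundant arm (hypothetical)
      x_a = rng.standard_normal(6)           # attractive translational/rotational "force"

      # Minimum-norm solution of the underdetermined system J dq = x_a
      dq_a = np.linalg.pinv(J) @ x_a         # equivalently np.linalg.lstsq(J, x_a, rcond=None)[0]

      print("residual:", np.linalg.norm(J @ dq_a - x_a))   # ~0: the system is satisfied exactly
      print("step norm:", np.linalg.norm(dq_a))            # smallest among all exact solutions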

  19. Potential of mean force between two hydrophobic solutes in water.

    PubMed

    Southall, Noel T; Dill, Ken A

    2002-12-10

    We study the potential of mean force between two nonpolar solutes in the Mercedes Benz model of water. Using NPT Monte Carlo simulations, we find that the solute size determines the relative preference of two solute molecules to come into contact ('contact minimum') or to be separated by a single layer of water ('solvent-separated minimum'). Larger solutes more strongly prefer the contacting state, while smaller solutes have more tendency to become solvent-separated, particularly in cold water. The thermal driving forces oscillate with solute separation. Contacts are stabilized by entropy, whereas solvent-separated solute pairing is stabilized by enthalpy. The free energy of interaction for small solutes is well-approximated by scaled-particle theory. Copyright 2002 Elsevier Science B.V.

  20. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, and a closed-form solution is guaranteed for this problem. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and hereby propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  1. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L²-norm for the scalar unknown u and a priori error estimates in the (L²)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H¹-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153

  3. Are You Your Friends' Friend? Poor Perception of Friendship Ties Limits the Ability to Promote Behavioral Change.

    PubMed

    Almaatouq, Abdullah; Radaelli, Laura; Pentland, Alex; Shmueli, Erez

    2016-01-01

    Persuasion is at the core of norm creation, emergence of collective action, and solutions to 'tragedy of the commons' problems. In this paper, we show that the directionality of friendship ties affects the extent to which individuals can influence the behavior of each other. Moreover, we find that people are typically poor at perceiving the directionality of their friendship ties and that this can significantly limit their ability to engage in cooperative arrangements. This could lead to failures in establishing compatible norms, acting together, finding compromise solutions, and persuading others to act. We then suggest strategies to overcome this limitation by using two topological characteristics of the perceived friendship network. The findings of this paper have significant consequences for designing interventions that seek to harness social influence for collective action.

  4. Are You Your Friends’ Friend? Poor Perception of Friendship Ties Limits the Ability to Promote Behavioral Change

    PubMed Central

    Almaatouq, Abdullah; Radaelli, Laura; Pentland, Alex; Shmueli, Erez

    2016-01-01

    Persuasion is at the core of norm creation, emergence of collective action, and solutions to 'tragedy of the commons' problems. In this paper, we show that the directionality of friendship ties affects the extent to which individuals can influence the behavior of each other. Moreover, we find that people are typically poor at perceiving the directionality of their friendship ties and that this can significantly limit their ability to engage in cooperative arrangements. This could lead to failures in establishing compatible norms, acting together, finding compromise solutions, and persuading others to act. We then suggest strategies to overcome this limitation by using two topological characteristics of the perceived friendship network. The findings of this paper have significant consequences for designing interventions that seek to harness social influence for collective action. PMID:27002530

  5. Conditioning of the Stable, Discrete-time Lyapunov Operator

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    The Schatten p-norm condition of the discrete-time Lyapunov operator L_A, defined on matrices P ∈ R^(n×n) by L_A P ≡ P − A P A^T, is studied for stable matrices A ∈ R^(n×n). Bounds are obtained for the norm of L_A and its inverse that depend on the spectrum, singular values and radius of stability of A. Since the solution P of the discrete-time algebraic Lyapunov equation (DALE) L_A P = Q can be ill-conditioned only when either L_A or Q is ill-conditioned, these bounds are useful in determining whether P admits a low-rank approximation, which is important in the numerical solution of the DALE for large n.
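
    For small n both the DALE and the operator itself can be handled directly: vectorizing P turns L_A into the matrix I − A⊗A, whose condition number is then available numerically. A brief sketch with a hypothetical stable A:

      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      A = np.array([[0.9, 0.5],
                    [0.0, 0.8]])                 # stable: spectral radius < 1 (hypothetical)
      Q = np.eye(2)

      P = solve_discrete_lyapunov(A, Q)          # solves P - A P A^T = Q
      print("DALE residual:", np.linalg.norm(P - A @ P @ A.T - Q))

      # L_A acts on vec(P) as the matrix I - kron(A, A), so its condition
      # number can be computed directly when n is small.
      L = np.eye(4) - np.kron(A, A)
      print("cond(L_A):", np.linalg.cond(L))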

  6. Validation of self-directed learning instrument and establishment of normative data for nursing students in taiwan: using polytomous item response theory.

    PubMed

    Cheng, Su-Fen; Lee-Hsieh, Jane; Turton, Michael A; Lin, Kuan-Chia

    2014-06-01

    Little research has investigated the establishment of norms for nursing students' self-directed learning (SDL) ability, recognized as an important capability for professional nurses. An item response theory (IRT) approach was used to establish norms for SDL abilities valid for the different nursing programs in Taiwan. The purposes of this study were (a) to use IRT with a graded response model to reexamine the SDL instrument, or the SDLI, originally developed by this research team using confirmatory factor analysis and (b) to establish SDL ability norms for the four different nursing education programs in Taiwan. Stratified random sampling with probability proportional to size was used. A minimum of 15% of students from the four different nursing education degree programs across Taiwan was selected. A total of 7,879 nursing students from 13 schools were recruited. The research instrument was the 20-item SDLI developed by Cheng, Kuo, Lin, and Lee-Hsieh (2010). IRT with the graded response model was used with a two-parameter logistic model (discrimination and difficulty) for the data analysis, calculated using MULTILOG. Norms were established using percentile rank. Analysis of item information and test information functions revealed that 18 items exhibited very high discrimination and two items had high discrimination. The test information function was higher in this range of scores, indicating greater precision in the estimate of nursing student SDL. Reliability fell between .80 and .94 for each domain and the SDLI as a whole. The total information function shows that the SDLI is appropriate for all nursing students, except for the top 2.5%. SDL ability norms were established for each nursing education program and for the nation as a whole. IRT is shown to be a potent and useful methodology for scale evaluation. The norms for SDL established in this research will provide practical standards for nursing educators and students in Taiwan.

  7. Fast decay of solutions for linear wave equations with dissipation localized near infinity in an exterior domain

    NASA Astrophysics Data System (ADS)

    Ryo, Ikehata

    Uniform energy and L2 decay of solutions for linear wave equations with localized dissipation will be given. In order to derive the L2-decay property of the solution, a useful device whose idea comes from Ikehata-Matsuyama (Sci. Math. Japon. 55 (2002) 33) is used. In fact, we shall show that the L2-norm and the total energy of solutions, respectively, decay like O(1/t) and O(1/t²) as t → +∞ for a kind of weighted initial data.

  8. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  9. Nearest Neighbor Averaging and its Effect on the Critical Level and Minimum Detectable Concentration for Scanning Radiological Survey Instruments that Perform Facility Release Surveys.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L

    2014-08-01

    Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
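
    Nearest-neighbor averaging simply replaces each grid reading with the mean of its immediate neighborhood, which lowers the variance of the background estimate and hence the critical level Lc. The snippet below is a generic illustration of that effect on a simulated background map (the 3x3 window and count rate are assumptions; this is not ERG's algorithm):

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(2)
      counts = rng.poisson(lam=20.0, size=(200, 200)).astype(float)   # simulated background scan

      smoothed = uniform_filter(counts, size=3, mode="nearest")        # 3x3 nearest-neighbor average

      # Lower spread of the background estimate -> lower critical level Lc
      print("std raw     :", counts.std())
      print("std averaged:", smoothed.std())   # roughly std/3 for a 3x3 window of independent readings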

  10. Effects of Visual Complexity and Sublexical Information in the Occipitotemporal Cortex in the Reading of Chinese Phonograms: A Single-Trial Analysis with MEG

    ERIC Educational Resources Information Center

    Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec

    2011-01-01

    We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…

  11. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
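
    Because each interferogram constrains only the difference of the orbit-error parameters of its two images, the network design matrix is rank deficient (a common offset is unobservable), and the minimum-norm condition pins down that free offset, yielding quasi-absolute estimates. The sketch below is a toy illustration of that idea (a made-up network of 4 images and 5 interferograms, not the authors' processing chain):

      import numpy as np

      # Each row encodes one interferogram as the difference (slave - master) of two
      # per-image error parameters; the pairs and observations are hypothetical.
      pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
      obs = np.array([0.8, -0.3, 0.5, 0.5, 0.2])     # per-interferogram error estimates

      A = np.zeros((len(pairs), 4))
      for k, (i, j) in enumerate(pairs):
          A[k, j], A[k, i] = 1.0, -1.0

      # Rank-deficient least squares: pinv returns the minimum-norm ("quasi-absolute") solution
      x = np.linalg.pinv(A) @ obs
      print("per-image errors:", x)      # mean forced to ~0 by the minimum-norm condition
      print("residuals:", A @ x - obs)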

  12. American Sign Language/English bilingual model: a longitudinal study of academic growth.

    PubMed

    Lange, Cheryl M; Lane-Outlaw, Susan; Lange, William E; Sherwood, Dyan L

    2013-10-01

    This study examines reading and mathematics academic growth of deaf and hard-of-hearing students instructed through an American Sign Language (ASL)/English bilingual model. The study participants were exposed to the model for a minimum of 4 years. The study participants' academic growth rates were measured using the Northwest Evaluation Association's Measure of Academic Progress assessment and compared with a national-normed group of grade-level peers that consisted primarily of hearing students. The study also compared academic growth for participants by various characteristics such as gender, parents' hearing status, and secondary disability status and examined the academic outcomes for students after a minimum of 4 years of instruction in an ASL/English bilingual model. The findings support the efficacy of the ASL/English bilingual model.

  13. On the decay of solutions to the 2D Neumann exterior problem for the wave equation

    NASA Astrophysics Data System (ADS)

    Secchi, Paolo; Shibata, Yoshihiro

    We consider the exterior problem in the plane for the wave equation with a Neumann boundary condition and study the asymptotic behavior of the solution for large times. For possible applications we are interested in showing a decay estimate that does not involve weighted norms of the initial data. In the paper we prove such an estimate by combining the local energy decay estimate with decay estimates for the free-space solution.

  14. Time domain convergence properties of Lyapunov stable penalty methods

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Sunkel, John

    1991-01-01

    Linear hyperbolic partial differential equations are analyzed using standard techniques to show that a sequence of solutions generated by the Liapunov stable penalty equations approaches the solution of the differential-algebraic equations governing the dynamics of multibody problems arising in linear vibrations. The analysis does not require that the system be conservative and does not impose any specific integration scheme. Variational statements are derived which bound the error in approximation by the norm of the constraint violation obtained in the approximate solutions.

  15. Successfully recruiting, surveying, and retaining college students: a description of methods for the Risk, Religiosity, and Emerging Adulthood Study.

    PubMed

    Berry, Devon M; Bass, Colleen P

    2012-12-01

    The selection of methods that purposefully reflect the norms of the target population increases the likelihood of effective recruitment, data collection, and retention. In the case of research among college students, researchers' appreciation of college student norms might be skewed by unappreciated generational and developmental differences. Our purpose in this article is to illustrate how attention to the generational and developmental characteristics of college students enhanced the methods of the Risk, Religiosity, and Emerging Adulthood study. We address the following challenges related to research with college students: recruitment, communication, data collection, and retention. Solutions incorporating Internet-based applications (e.g., Facebook) and sensitivity to the generational norms of participants (e.g., multiple means of communication) are described in detail. Copyright © 2012 Wiley Periodicals, Inc.

  16. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  17. Optical solitons in nematic liquid crystals: model with saturation effects

    NASA Astrophysics Data System (ADS)

    Borgna, Juan Pablo; Panayotaros, Panayotis; Rial, Diego; de la Vega, Constanza Sánchez F.

    2018-04-01

    We study a 2D system that couples a Schrödinger evolution equation to a nonlinear elliptic equation and models the propagation of a laser beam in a nematic liquid crystal. The nonlinear elliptic equation describes the response of the director angle to the laser beam electric field. We obtain results on well-posedness and solitary wave solutions of this system, generalizing results for a well-studied simpler system with a linear elliptic equation for the director field. The analysis of the nonlinear elliptic problem shows the existence of an isolated global branch of solutions with director angles that remain bounded for arbitrary electric field. The results on the director equation are also used to show local and global existence, as well as decay for initial conditions with sufficiently small L²-norm. For sufficiently large L²-norm we show the existence of energy minimizing optical solitons with radial, positive and monotone profiles.

  18. The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition

    NASA Astrophysics Data System (ADS)

    Markopoulos, Panos P.; Chachlakis, Dimitris G.; Papalexakis, Evangelos E.

    2018-04-01

    We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of N D × M matrices that are to be jointly decomposed. Our contributions are as follows. (i) We prove that the problem is equivalent to combinatorial optimization over N antipodal-binary variables. (ii) We derive the first two algorithms in the literature for its exact solution. The first algorithm has cost exponential in N; the second one has cost polynomial in N (under a mild assumption). Our algorithms are accompanied by formal complexity analysis. (iii) We conduct numerical studies to compare the performance of exact L1-TUCKER2 (proposed) with standard HOSVD, HOOI, GLRAM, PCA, L1-PCA, and TPCA-L1. Our studies show that L1-TUCKER2 outperforms (in tensor approximation) all the above counterparts when the processed data are outlier corrupted.

  19. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than the previous one, which depends on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
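
    The classical sparse-only case that the paper generalizes is easy to state concretely: to approximate min ||x||_1 subject to Ax = b, IRLS repeatedly solves a weighted least-squares problem whose weights come from the previous iterate. The sketch below is that textbook variant only, shown to make the reweighting idea concrete; it is not the joint low-rank-plus-sparse solver developed in the paper, and the test data are random.

      import numpy as np

      def irls_sparse(A, b, n_iter=50, eps=1e-8):
          """IRLS for min ||x||_1 subject to Ax = b:
          each step solves min sum_i x_i^2 / w_i s.t. Ax = b with w_i = |x_i| + eps,
          whose closed form is x = W A^T (A W A^T)^{-1} b with W = diag(w)."""
          x = A.T @ np.linalg.solve(A @ A.T, b)          # start from the minimum L2-norm solution
          for _ in range(n_iter):
              W = np.diag(np.abs(x) + eps)
              x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
          return x

      rng = np.random.default_rng(3)
      A = rng.standard_normal((20, 60))
      x_true = np.zeros(60); x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
      b = A @ x_true
      x = irls_sparse(A, b)
      print("recovery error:", np.linalg.norm(x - x_true))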

  20. Comparing solutions to the expectancy-value muddle in the theory of planned behaviour.

    PubMed

    O' Sullivan, B; McGee, H; Keegan, O

    2008-11-01

    The authors of the Theories of Reasoned Action (TRA) and Planned Behaviour (TPB) recommended a method for statistically analysing the relationship between the indirect belief-based measures and the direct measures of attitude, subjective norm, and perceived behavioural control (PBC). However, there is a growing awareness that this yields statistically uninterpretable results. This study's objective was to compare two solutions to what has been called the 'expectancy-value muddle'. These solutions were (i) optimal scoring of modal beliefs and (ii) individual beliefs without multiplicative composites. Cross-sectional data were collected by telephone interview. Participants were 110 first-degree relatives (FDRs) of patients diagnosed with colorectal cancer (CRC), who were offered CRC screening in the study hospital (83% response rate). Participants were asked to rate the TPB constructs in relation to attending for CRC screening. There was no significant difference in the correlation between behavioural beliefs and attitude for rescaled modal and individual beliefs. This was also the case for control beliefs and PBC. By contrast, there was a large correlation between rescaled modal normative beliefs and subjective norm, whereas individual normative beliefs did not correlate with subjective norm. Using individual beliefs without multiplicative composites allows for a fairly unproblematic interpretation of the relationship between the indirect and direct TPB constructs (French & Hankins, 2003). Therefore, it is recommended that future studies consider using individual measures of behavioural and control beliefs without multiplicative composites and examine a different way of measuring individual normative beliefs without multiplicative composites to that used in this study.

  1. Structural Change and Interaction Behavior in Multimodal Networks

    DTIC Science & Technology

    2010-07-30

    S̃_q ṽ = P D(∑_p S_{q→p})^(−1/2) ṽ, so λ and D(∑_p S_{q→p})^(−1/2) ṽ are an eigenvalue-eigenvector pair for P. By the Perron-Frobenius theorem, we know that λ... Frobenius norm, and α = 1/(1+γ). The closed-form solution is F*_{p→q} = (1 − α)(I_{n_q} − α S̃_q)^(−1) A^T_{p→q} [30, 26]. 4 Experiment: We evaluated our method for... of mode X_p and the jth cluster of X_q. An approximate factorization is then achieved by minimizing a loss function comprised of the Frobenius norms of

  2. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1989-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least squares norm are reported.

  3. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1992-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.

  4. An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions

    NASA Astrophysics Data System (ADS)

    Zahr, M. J.; Persson, P.-O.

    2018-07-01

    This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding the Gibbs' phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs' phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh, and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately h/2, whereas other popular discontinuity indicators fail to satisfy the latter. Another important contribution of this work is the observation that traditional reduced space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context, since severe overshoot and undershoot in the solution, i.e., Gibbs' phenomena, may make it impossible to solve the discrete conservation law on non-aligned meshes. Therefore, we advocate a gradient-based, full space solver where the mesh and conservation law solution converge to their optimal values simultaneously and therefore never require the solution of the discrete conservation law on a non-aligned mesh. The merit of the proposed method is demonstrated on a number of one- and two-dimensional model problems including the L2 projection of discontinuous functions, Burgers' equation with a discontinuous source term, transonic flow through a nozzle, and supersonic flow around a bluff body. We demonstrate optimal O(h^(p+1)) convergence rates in the L1 norm for polynomial orders up to p = 6 and show that accurate solutions can be obtained on extremely coarse meshes.

  5. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.

  6. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
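
    At the core of both the nulling beamformer and NBSS is a truncated SVD of the ROI gain (forward) matrix, which condenses the ROI's many source columns into a few dominant sensor-space components that are then passed to the beamformer as constraints. The sketch below illustrates only that TSVD step on random stand-in matrices (not real MEG forward models); the final down-weighting rule is a purely hypothetical stand-in for the subspace-suppression reweighting described in the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      n_sensors, n_sources_roi = 306, 120
      G_roi = rng.standard_normal((n_sensors, n_sources_roi))   # stand-in ROI gain matrix

      # Truncated SVD: keep the k strongest sensor-space components of the ROI
      U, s, Vt = np.linalg.svd(G_roi, full_matrices=False)
      k = 3
      roi_subspace = U[:, :k]            # columns span the dominant ROI signal subspace

      explained = (s[:k] ** 2).sum() / (s ** 2).sum()
      print(f"top-{k} components explain {explained:.1%} of the ROI gain energy")

      # Schematic NBSS-style idea: down-weight components that overlap strongly with
      # another ROI's subspace before handing them to the beamformer constraints.
      other = np.linalg.svd(rng.standard_normal((n_sensors, 90)), full_matrices=False)[0][:, :k]
      overlap = np.linalg.norm(roi_subspace.T @ other, axis=1)   # per-component overlap measure
      weights = 1.0 / (1.0 + overlap)                            # hypothetical down-weighting rule
      print("component overlaps:", overlap, "-> weights:", weights)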

  7. Mixed H2/H-infinity control with output feedback compensators using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  8. Mixed H2/H(infinity)-Control with an output-feedback compensator using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  9. Boundary Closures for Fourth-order Energy Stable Weighted Essentially Non-Oscillatory Finite Difference Schemes

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.; Yamaleev, Nail K.; Frankel, Steven H.

    2009-01-01

    A general strategy exists for constructing Energy Stable Weighted Essentially Non Oscillatory (ESWENO) finite difference schemes up to eighth-order on periodic domains. These ESWENO schemes satisfy an energy norm stability proof for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, boundary closures are developed for the fourth-order ESWENO scheme that maintain wherever possible the WENO stencil biasing properties, while satisfying the summation-by-parts (SBP) operator convention, thereby ensuring stability in an L2 norm. Second-order, and third-order boundary closures are developed that achieve stability in diagonal and block norms, respectively. The global accuracy for the second-order closures is three, and for the third-order closures is four. A novel set of non-uniform flux interpolation points is necessary near the boundaries to simultaneously achieve 1) accuracy, 2) the SBP convention, and 3) WENO stencil biasing mechanics.

  10. A minimum entropy principle in the gas dynamics equations

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1986-01-01

    Let u(x̄, t) be a weak solution of the Euler equations governing inviscid polytropic gas dynamics; in addition, u(x̄, t) is assumed to respect the usual entropy conditions connected with the conservative Euler equations. We show that such entropy solutions of the gas dynamics equations satisfy a minimum entropy principle, namely, that the spatial minimum of their specific entropy, ess inf_x s(u(x,t)), is an increasing function of time. This principle equally applies to discrete approximations of the Euler equations such as the Godunov-type and Lax-Friedrichs schemes. Our derivation of this minimum principle makes use of the fact that there is a family of generalized entropy functions connected with the conservative Euler equations.

  11. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. To begin, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L1 or L∞ norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
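
    With an L1 or L∞ distance, the minimum distance between two convex polyhedra described by linear inequalities is a single linear program. A compact sketch for the L∞ case in 2-D, using two toy axis-aligned boxes rather than actual robot/obstacle models:

      import numpy as np
      from scipy.optimize import linprog

      # Two convex sets {x : A1 x <= b1} and {y : A2 y <= b2}: axis-aligned boxes here.
      A_box = np.vstack([np.eye(2), -np.eye(2)])
      b1 = np.array([1.0, 1.0, 0.0, 0.0])        # robot link: [0,1] x [0,1]
      b2 = np.array([5.0, 3.0, -3.0, -2.0])      # obstacle:  [3,5] x [2,3]

      # Variables z = [x (2), y (2), t]; minimize t subject to |x_i - y_i| <= t
      d = 2
      c = np.r_[np.zeros(2 * d), 1.0]
      A_ub = np.zeros((6 * d, 2 * d + 1))
      b_ub = np.zeros(6 * d)
      A_ub[:2*d, :d] = A_box;              b_ub[:2*d] = b1            # x in set 1
      A_ub[2*d:4*d, d:2*d] = A_box;        b_ub[2*d:4*d] = b2         # y in set 2
      A_ub[4*d:5*d, :d] = np.eye(d);  A_ub[4*d:5*d, d:2*d] = -np.eye(d);  A_ub[4*d:5*d, -1] = -1
      A_ub[5*d:, :d] = -np.eye(d);    A_ub[5*d:, d:2*d] = np.eye(d);      A_ub[5*d:, -1] = -1

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d) + [(0, None)])
      print("L-infinity distance:", res.fun, "closest pair:", res.x[:d], res.x[d:2*d])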

  12. On the Minimum Induced Drag of Wings

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2015-01-01

    Birds do not require vertical tails, and they do not appear to have any mechanism by which to control yaw; the albatross is a notable example in this regard. The authors believe this is possible because of a unique adaptation by which there exists a triple-optimal solution that provides maximum aerodynamic efficiency, minimum structural weight, and coordinated control in roll and yaw. Until now, this solution has eluded researchers and remained unknown. Here it is shown that the correct specification of spanload provides all three solutions at once: maximum aerodynamic efficiency, minimum structural weight, and coordinated control. This result has far-reaching implications for aircraft design, as well as the potential for dramatic efficiency improvements.

  13. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  14. Fraction of exhaled nitric oxide (FeNO ) norms in healthy North African children 5-16 years old.

    PubMed

    Rouatbi, Sonia; Alqodwa, Ashraf; Ben Mdella, Samia; Ben Saad, Helmi

    2013-10-01

    (i) To identify factors that influence FeNO values in healthy North African, Arab children aged 6-16 years; (ii) to test the applicability and reliability of previously published FeNO norms; and (iii) if needed, to establish FeNO norms in this population and to prospectively assess their reliability. This was a cross-sectional analytical study. A convenience sample of healthy Tunisian children, aged 6-16 years, was recruited. Subjects first responded to two questionnaires, and then FeNO levels were measured by an online method with an electrochemical analyzer (Medisoft, Sorinnes [Dinant], Belgium). Anthropometric and spirometric data were collected. Simple and multiple linear regressions were determined. The 95% confidence interval (95% CI) and upper limit of normal (ULN) were defined. Two hundred eleven children (107 boys) were retained. Anthropometric data, gender, socioeconomic level, obesity or puberty status, and sports activity were not independent influencing variables. Total-sample FeNO data appeared to be influenced only by maximum mid-expiratory flow (l sec(-1); r(2) = 0.0236, P = 0.0516). For boys, only first-second forced expiratory volume (l) explained a slight (r(2) = 0.0451) but significant FeNO variability (P = 0.0281). For girls, FeNO was not significantly correlated with any of the measured data. For North African/Arab children, FeNO values were significantly lower than in other populations, and the available published FeNO norms did not reliably predict FeNO in our population. The mean ± SD (95% CI ULN, minimum-maximum) of FeNO (ppb) for the total sample was 5.0 ± 2.9 (5.4, 1.0-17.0). For North African, Arab children of any age, any FeNO value greater than 17.0 ppb may be considered abnormal. Finally, in an additional group of children prospectively assessed, we found no child with a FeNO higher than 17.0 ppb. Our FeNO norms enrich the global repository of FeNO norms that pediatricians can use to choose the most appropriate norms based on children's location or ethnicity. © 2012 Wiley Periodicals, Inc.

  15. Multiple positive solutions for a class of integral inclusions

    NASA Astrophysics Data System (ADS)

    Hong, Shihuang

    2008-04-01

    This paper deals with sufficient conditions for the existence of at least two positive solutions for a class of integral inclusions arising in traffic theory. To show our main results, we apply a norm-type expansion and compression fixed point theorem for multivalued maps due to Agarwal and O'Regan [A note on the existence of multiple fixed points for multivalued maps with applications, J. Differential Equations 160 (2000) 389-403].

  16. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, it is shown that the uniform exponential stability of the solution semigroup is preserved under approximation. This result is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  17. A Minimum (Delta)V Orbit Maintenance Strategy for Low-Altitude Missions Using Burn Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2011-01-01

    Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance (Delta)V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this (Delta)V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the (Delta)V savings from the feasible solution to the optimal solution.

  18. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, however, the number of Newton iterations needed per step to solve the discretized system of equations can vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing a reasonably small number of vectors N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
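    The role GMRES plays here, minimizing the L2 norm of the residual over a small set of orthogonal Krylov vectors, can be illustrated with SciPy's sparse GMRES solver. The nonsymmetric test matrix below is a simple stand-in, not the authors' Navier-Stokes system, and the restart length plays the part of the N mentioned in the abstract.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Stand-in for the linearized system A * dq = rhs solved at each time step;
# A is a simple nonsymmetric 1D convection-diffusion-like operator.
n = 200
main = 2.0 * np.ones(n)
lower = -1.2 * np.ones(n - 1)   # asymmetry mimics convective upwinding
upper = -0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")
b = np.ones(n)

residuals = []
def log_residual(rk):
    # With the default callback type, SciPy passes the (preconditioned) residual
    # norm each inner iteration; handle either a scalar or a vector defensively.
    residuals.append(float(np.linalg.norm(rk)) if np.ndim(rk) else float(rk))

# 'restart' bounds the number of orthogonal Krylov vectors kept between restarts.
x, info = gmres(A, b, restart=20, maxiter=500, callback=log_residual)
print("converged" if info == 0 else f"info={info}",
      "final L2 residual:", np.linalg.norm(b - A @ x),
      "iterations logged:", len(residuals))
```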

  19. Application of gamma-ray spectrometry in a NORM industry for its radiometrical characterization

    NASA Astrophysics Data System (ADS)

    Mantero, J.; Gázquez, M. J.; Hurtado, S.; Bolívar, J. P.; García-Tenorio, R.

    2015-11-01

    Industrial activities involving Naturally Occurring Radioactive Materials (NORM) are found among the most important industrial sectors worldwide, such as oil/gas facilities, metal production, the phosphate industry, and zircon treatment, making the radioactive characterization of the materials involved in their production processes particularly significant in order to assess the potential radiological risk for workers and the natural environment. High-resolution gamma spectrometry is a versatile, non-destructive radiometric technique that makes simultaneous determination of several radionuclides possible with little sample preparation. However, NORM samples cover a wide variety of densities and compositions, as opposed to the standards used in gamma efficiency calibration, which are either water-based solutions or standard/reference sources of similar composition. For that reason, self-absorption correction effects (especially in the low-energy range) must be considered individually for every sample. In this work, an experimental and a semi-empirical methodology of self-absorption correction were applied to NORM samples, and the obtained results were compared critically in order to establish the best practice in relation to the circumstances of an individual laboratory. This methodology was applied to samples coming from a TiO2 factory (a NORM industry) located in the south-west of Spain, where the activity concentration of several radionuclides from the uranium and thorium series through the production process was measured. These results are presented in this work.

  20. Multiple stimulus reversible hydrogels

    DOEpatents

    Gutowska, Anna; Krzyminski, Karol J.

    2003-12-09

    A polymeric solution capable of gelling upon exposure to a critical minimum value of a plurality of environmental stimuli is disclosed. The polymeric solution may be an aqueous solution utilized in vivo and capable of having the gelation reversed if at least one of the stimuli falls below, or outside the range of, the critical minimum value. The aqueous polymeric solution can be used either in industrial or pharmaceutical environments. In the medical environment, the aqueous polymeric solution is provided with either a chemical or radioisotopic therapeutic agent for delivery to a specific body part. The primary advantage of the process is that exposure to one environmental stimulus alone will not cause gelation, thereby enabling the therapeutic agent to be conducted through the body for relatively long distances without gelation occurring.

  1. Multiple stimulus reversible hydrogels

    DOEpatents

    Gutowska, Anna; Krzyminski, Karol J.

    2006-04-25

    A polymeric solution capable of gelling upon exposure to a critical minimum value of a plurality of environmental stimuli is disclosed. The polymeric solution may be an aqueous solution utilized in vivo and capable of having the gelation reversed if at least one of the stimuli falls below, or outside the range of, the critical minimum value. The aqueous polymeric solution can be used either in industrial or pharmaceutical environments. In the medical environment, the aqueous polymeric solution is provided with either a chemical or radioisotopic therapeutic agent for delivery to a specific body part. The primary advantage of the process is that exposure to one environmental stimulus alone will not cause gelation, thereby enabling the therapeutic agent to be conducted through the body for relatively long distances without gelation occurring.

  2. Simulating the minimum core for hydrophobic collapse in globular proteins.

    PubMed Central

    Tsai, J.; Gerstein, M.; Levitt, M.

    1997-01-01

    To investigate the nature of hydrophobic collapse considered to be the driving force in protein folding, we have simulated aqueous solutions of two model hydrophobic solutes, methane and isobutylene. Using a novel methodology for determining contacts, we can precisely follow hydrophobic aggregation as it proceeds through three stages: dispersed, transition, and collapsed. Theoretical modeling of the cluster formation observed by simulation indicates that this aggregation is cooperative and that the simulations favor the formation of a single cluster midway through the transition stage. This defines a minimum solute hydrophobic core volume. We compare this with protein hydrophobic core volumes determined from solved crystal structures. Our analysis shows that the solute core volume roughly estimates the minimum core size required for independent hydrophobic stabilization of a protein and defines a limiting concentration of nonpolar residues that can cause hydrophobic collapse. These results suggest that the physical forces driving aggregation of hydrophobic molecules in water are indeed responsible for protein folding. PMID:9416609

  3. Challenging dominant norms of masculinity for HIV prevention.

    PubMed

    MacPhail, Catherine

    2003-01-01

    Within South Africa there is a growing HIV epidemic, particularly among young heterosexual people. A recent report (NMF/HSRC, 2002) indicates that levels of HIV infection among young people aged 15-24 years are 9.3%, although other studies in more specific locations have shown levels to be higher than this. One of the best means of developing successful and innovative HIV prevention programmes for young people is to enhance our understanding of youth sexuality and the manner in which dominant norms contribute to the spread of sexually transmitted diseases. Social norms of masculinity are particularly important in this regard, as the ways in which 'normal' men are defined (such as through the acquisition of multiple partners, power over women, and negative attitudes towards condoms) are often in conflict with the true emotional vulnerabilities of young men. Given the strong influence of peer groups on young people and the belief that one of the solutions to behaviour change lies in peer renegotiation of dominant norms, there is a need to begin to investigate young men who challenge dominant norms of masculinity. It is in investigating their points of view that a platform for the deconstruction of stereotypical masculinities and the reconstruction of new norms can be formed. The paper begins to consider these counter-normative ideas by highlighting the discussions of young South African men aged 13-25 years in focus groups and in-depth individual interviews conducted in Gauteng Province. It is apparent that among this group there are young men challenging normative views of masculinity in a manner that could be harnessed within HIV prevention initiatives.

  4. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.

  5. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l(2)-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
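    The core of such a scheme, a weighted local linear fit with an l(2) complexity penalty solved in closed form, can be sketched as follows. This is only a simplified illustration under assumed Gaussian distance weights; it does not include the MLS error norm or the manifold-based smoothness term of the actual RLLR method, and all names and parameters are illustrative.

```python
import numpy as np

def local_linear_fit(X, y, x0, weights, lam=1e-2):
    """Ridge-regularized, weighted local linear fit around x0.

    Solves  min_beta  sum_i w_i (y_i - [1, x_i - x0] beta)^2 + lam ||beta||_2^2,
    which has the closed form  beta = (A^T W A + lam I)^{-1} A^T W y.
    """
    A = np.hstack([np.ones((len(X), 1)), X - x0])   # local affine model
    W = np.diag(weights)
    G = A.T @ W @ A + lam * np.eye(A.shape[1])
    beta = np.linalg.solve(G, A.T @ W @ y)
    return beta[0]            # fitted value at x0 is the intercept term

# Toy interpolation of a missing pixel from its 8 neighbours.
rng = np.random.default_rng(0)
coords = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)], float)
values = 100 + 3 * coords[:, 0] - 2 * coords[:, 1] + rng.normal(0, 0.5, len(coords))
w = np.exp(-np.sum(coords**2, axis=1) / 2.0)        # Gaussian distance weights
print(local_linear_fit(coords, values, np.array([0.0, 0.0]), w))   # approximately 100
```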

  6. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
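    The link mentioned above can be illustrated for the Schatten norm of order p = 1 (the nuclear norm): its proximal map applies the scalar l(1) soft-thresholding rule to the singular values. The sketch below is a generic illustration of that relation, not the authors' reconstruction algorithm.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal map of tau * ||.||_1 on a vector (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_nuclear(M, tau):
    """Proximal map of tau * Schatten-1 (nuclear) norm of a matrix:
    apply the l1 prox to the singular values and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(prox_l1(s, tau)) @ Vt

# A noisy low-rank matrix is pushed back toward low rank by the prox.
rng = np.random.default_rng(1)
L = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))      # rank-3 ground truth
noisy = L + 0.05 * rng.normal(size=L.shape)
denoised = prox_nuclear(noisy, tau=1.0)
print(np.linalg.matrix_rank(denoised, tol=1e-6))              # typically 3
```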

  7. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the L0 pseudo-norm, is able to better induce sparsity than the commonly used L1 norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
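    A minimal sketch of an iterative firm-shrinkage scheme of the kind described, proximal gradient descent on the logistic loss with a firm-thresholding prox, is given below. The firm-shrinkage operator shown is the standard Gao-Bruce form associated with an MCP-like weakly convex penalty and is assumed here for illustration; the data, the parameter values, and the step-scaling of the threshold are not taken from the paper.

```python
import numpy as np

def firm_shrinkage(v, lam, mu):
    """Firm thresholding: zero below lam, identity beyond mu, linear in between.
    It is the prox of an MCP-like weakly convex penalty (requires mu > lam)."""
    return np.where(np.abs(v) <= lam, 0.0,
           np.where(np.abs(v) >= mu, v,
                    np.sign(v) * mu * (np.abs(v) - lam) / (mu - lam)))

def sparse_logistic(X, y, lam=0.05, mu=0.2, n_iter=500):
    """Proximal gradient descent on the mean logistic loss with firm-shrinkage prox."""
    n, d = X.shape
    w = np.zeros(d)
    step = 4.0 * n / (np.linalg.norm(X, 2) ** 2)   # 1/L, with L = ||X||_2^2 / (4n)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid
        grad = X.T @ (p - y) / n
        # Threshold scaled by the step size; mu must stay above step * lam.
        w = firm_shrinkage(w - step * grad, step * lam, mu)
    return w

# Synthetic data with a 3-sparse ground truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
w_true = np.zeros(20); w_true[:3] = [1.5, -2.0, 1.0]
y = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
w_hat = sparse_logistic(X, y)
print(np.nonzero(np.abs(w_hat) > 1e-8)[0])   # support should concentrate on {0, 1, 2}
```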

  8. Robust control of systems with real parameter uncertainty and unmodelled dynamics

    NASA Technical Reports Server (NTRS)

    Chang, Bor-Chin; Fischl, Robert

    1991-01-01

    During this research period we have made significant progress in the four proposed areas: (1) design of robust controllers via H infinity optimization; (2) design of robust controllers via mixed H(exp 2)/H infinity optimization; (3) M-delta structure and robust stability analysis for structured uncertainties; and (4) a study on controllability and observability of the perturbed plant. It is now well known that the two-Riccati-equation solution to the H infinity control problem can be used to characterize all possible stabilizing optimal or suboptimal H infinity controllers if the optimal H infinity norm, or gamma, an upper bound of a suboptimal H infinity norm, is given. In this research, we discovered some useful properties of these H infinity Riccati solutions. Among them, the most prominent one is that the spectral radius of the product of these two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms were developed to compute the optimal H infinity norm. We also set up a detailed procedure for applying the H infinity theory to robust control systems design. The desire to design controllers with H infinity robustness but H(exp 2) performance has recently resulted in the mixed H(exp 2)/H infinity control problem formulation. The mixed H(exp 2)/H infinity problem has drawn the attention of many investigators; however, solutions are available only for special cases of this problem. We formulated a relatively realistic control problem with an H(exp 2) performance index and an H infinity robustness constraint as a more general mixed H(exp 2)/H infinity problem. Although an optimal solution for this more general mixed H(exp 2)/H infinity problem is not yet available, we proposed a design approach which can be used, through proper choice of the available design parameters, to influence both robustness and performance. For a large class of linear time-invariant systems with real parametric perturbations, the coefficient vector of the characteristic polynomial is a multilinear function of the real parameter vector. Based on this multilinear mapping relationship, together with recent developments for polytopic polynomials and the parameter-domain partition technique, we proposed an iterative algorithm for computing the real structured singular value.
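    The gamma-iteration idea behind computing an optimal H infinity norm can be illustrated with the standard Hamiltonian bisection test (strictly proper case, D = 0): the H infinity norm of G is below gamma exactly when the associated Hamiltonian has no imaginary-axis eigenvalues. The sketch below is this generic bisection, not the authors' quadratically convergent algorithm based on the Riccati spectral-radius property; the example system and tolerances are illustrative.

```python
import numpy as np

def hinf_norm_bisect(A, B, C, tol=1e-6, gamma_hi=1e4):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B (A Hurwitz, D = 0) by bisection.

    Standard bounded-real-lemma eigenvalue test: for gamma > 0, ||G||_inf < gamma
    iff the Hamiltonian
        H(gamma) = [[A,        B B^T / gamma^2],
                    [-C^T C,  -A^T            ]]
    has no eigenvalues on the imaginary axis.
    """
    BBt, CtC = B @ B.T, C.T @ C
    lo, hi = 1e-8, gamma_hi
    while hi - lo > tol * max(1.0, hi):
        gamma = 0.5 * (lo + hi)
        H = np.block([[A, BBt / gamma**2], [-CtC, -A.T]])
        eig = np.linalg.eigvals(H)
        on_axis = np.any(np.abs(eig.real) < 1e-7)   # heuristic numerical tolerance
        if on_axis:          # gamma <= ||G||_inf : raise the lower bound
            lo = gamma
        else:                # gamma >  ||G||_inf : lower the upper bound
            hi = gamma
    return 0.5 * (lo + hi)

# Second-order example: G(s) = 1 / (s^2 + 0.2 s + 1), resonance peak near 5.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(hinf_norm_bisect(A, B, C))   # roughly 5.0
```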

  9. Safe Ride Standards for Casualty Evacuation Using Unmanned Aerial Vehicles (Normes de transport sans danger pour l’evacuation des blesses par vehicules aeriens sans pilote)

    DTIC Science & Technology

    2012-12-01

    requirements as part of an overall medical support concept In this document several potential CONOPS proposals are added as food for thought (see Chapter 4...safe flight minimums for manned flight; • En route or terminal environment (landing zone) is contaminated by an industrial spill or by a CBRN event...Further, the U.S. Food and Drug Administration (FDA) and other national/international medical regulatory authorities have requirements for portable

  10. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber, and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  11. A geometric multigrid preconditioning strategy for DPG system matrices

    DOE PAGES

    Roberts, Nathan V.; Chan, Jesse

    2017-08-23

    Here, the discontinuous Petrov–Galerkin (DPG) methodology of Demkowicz and Gopalakrishnan (2010, 2011) guarantees the optimality of the solution in an energy norm and provides several features facilitating adaptive schemes. A key question that has not yet been answered in general – though there are some results, for example for Poisson – is how best to precondition the DPG system matrix, so that iterative solvers may be used to allow solution of large-scale problems.

  12. Multiple Objective Evolution Strategies (MOES): A User’s Guide to Running the Software

    DTIC Science & Technology

    2014-11-01

    L2-norm distance is computed in parameter space between each pair of solutions in the elite population and tested against the tolerance Dclone, which...the most efficient solutions to the test problems in the Input_Files directory. The developers recommend using mu,kappa,lambda. The mu,kappa,lambda...be used as a sanity test for complicated multimodal problems. Whenever the optimum cannot be reached by a local search, the evolutionary results

  13. Approximations and Solution Estimates in Optimization

    DTIC Science & Technology

    2016-04-06

    comprehensive descriptions of epi-convergence and its connections to variational analysis broadly. Our motivation for going beyond normed linear spaces , which...proper, every closed ball in this metric space is compact and the existence of solutions of such optimal fitting problems is more easily established...lsc-fcns(X), dl(fν , f) → 0 implies that fν epi-converges to f. We recall that a metric space is proper if every closed ball in that space is compact

  14. Global, decaying solutions of a focusing energy-critical heat equation in R4

    NASA Astrophysics Data System (ADS)

    Gustafson, Stephen; Roxanas, Dimitrios

    2018-05-01

    We study solutions of the focusing energy-critical nonlinear heat equation u_t - Δu = |u|^2 u in R^4. We show that solutions emanating from initial data with energy and Ḣ1-norm below those of the stationary solution W are global and decay to zero, via the "concentration-compactness plus rigidity" strategy of Kenig-Merle [33,34]. First, global such solutions are shown to dissipate to zero, using a refinement of the small data theory and the L2-dissipation relation. Finite-time blow-up is then ruled out using the backwards-uniqueness of Escauriaza-Seregin-Sverak [17,18] in an argument similar to that of Kenig-Koch [32] for the Navier-Stokes equations.

  15. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
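    Two ingredients of such NN-SI formulations, the Hankel matrix built from measured output data and the nuclear norm (sum of singular values) whose minimization encourages a low-order realization, can be sketched as follows. This is a simplified single-output, input-free illustration with a synthetic impulse response; it is not the authors' Tabu Search or ADMM procedure.

```python
import numpy as np

def hankel_matrix(y, rows):
    """Hankel matrix H[i, j] = y[i + j] built from an output sequence."""
    y = np.asarray(y, float)
    cols = len(y) - rows + 1
    return np.array([[y[i + j] for j in range(cols)] for i in range(rows)])

def nuclear_norm(M):
    """Sum of singular values; a convex surrogate for matrix rank."""
    return np.linalg.svd(M, compute_uv=False).sum()

# A noise-free second-order impulse response gives a (numerically) rank-2 Hankel matrix.
k = np.arange(60)
y = 0.9**k * np.cos(0.4 * k)                  # stand-in for measured output data
H = hankel_matrix(y, rows=10)
print(np.linalg.matrix_rank(H, tol=1e-8), nuclear_norm(H))
```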

  16. Community Norms, Enforcement Of Minimum Legal Drinking Age Laws, Personal Beliefs And Underage Drinking: An Explanatory Model

    PubMed Central

    Grube, Joel W.; Paschall, Mallie J.

    2009-01-01

    Strategies to enforce underage drinking laws are aimed at reducing youth access to alcohol from commercial and social sources and deterring its possession and use. However, little is known about the processes through which enforcement strategies may affect underage drinking. The purpose of the current study is to present and test a conceptual model that specifies possible direct and indirect relationships among adolescents' perception of community alcohol norms, enforcement of underage drinking laws, personal beliefs (perceived parental disapproval of alcohol use, perceived alcohol availability, perceived drinking by peers, perceived harm and personal disapproval of alcohol use), and their past-30-day alcohol use. This study used data from 17,830 middle and high school students who participated in the 2007 Oregon Health Teens Survey. Structural equations modeling indicated that perceived community disapproval of adolescents' alcohol use was directly and positively related to perceived local police enforcement of underage drinking laws. In addition, adolescents' personal beliefs appeared to mediate the relationship between perceived enforcement of underage drinking laws and past-30-day alcohol use. Enforcement of underage drinking laws appeared to partially mediate the relationship between perceived community disapproval and personal beliefs related to alcohol use. The results of this study suggest that environmental prevention efforts to reduce underage drinking should target adults' attitudes and community norms about underage drinking as well as the beliefs of youth themselves. PMID:20135210

  17. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a chosen step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, which indicates highly accurate calibration with the new method. In real-data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that textures were well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.
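    A schematic of the second calibration step, an iterative search over candidate transversal shifts that minimizes an L0-type measure of the gradient image, might look as follows. Here reconstruct_fdk is a hypothetical placeholder for whatever FDK reconstruction routine is available, the gradient-count threshold stands in for the L0 norm, and the shrinking search schedule is purely illustrative.

```python
import numpy as np

def l0_gradient_measure(img, eps=1e-3):
    """Approximate L0 'norm' of the gradient image: count of non-negligible
    finite-difference gradient magnitudes (geometric artifacts inflate it)."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return int(np.count_nonzero(np.hypot(gx, gy) > eps))

def calibrate_shift(projections, reconstruct_fdk, search_center, half_range,
                    step=0.5, n_refine=4):
    """Iterative search for the transversal shift minimizing the L0 gradient cost.

    reconstruct_fdk(projections, shift) is a user-supplied (here hypothetical)
    FDK reconstruction that applies the candidate detector shift."""
    best = search_center
    for _ in range(n_refine):
        shifts = np.arange(best - half_range, best + half_range + 1e-9, step)
        costs = [l0_gradient_measure(reconstruct_fdk(projections, s)) for s in shifts]
        best = shifts[int(np.argmin(costs))]
        half_range, step = step, step / 4.0     # shrink the search window each pass
    return best
```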

  18. At-Least Version of the Generalized Minimum Spanning Tree Problem: Optimization Through Ant Colony System and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Janich, Karl W.

    2005-01-01

    The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at a minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions to the problem, as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move to different clusters and cause small changes in the network makeup. Rerunning the algorithm from scratch after such small changes would be inefficient, so a genetic algorithm based on the top few solutions found by the ACS algorithm is proposed to quickly and efficiently adapt the network to these changes.

  19. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    PubMed

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an intensive research area. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level and to validate them using a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is best suited for mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) has been used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.

  20. Evaluation of Complex Human Performance: The Promise of Computer-Based Simulation

    ERIC Educational Resources Information Center

    Newsom, Robert S.; And Others

    1978-01-01

    For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…

  1. Portion size me: plate-size induced consumption norms and win-win solutions for reducing food intake and waste.

    PubMed

    Wansink, Brian; van Ittersum, Koert

    2013-12-01

    Research on the self-serving of food has empirically ignored the role that visual consumption norms play in determining how much food we serve on different sized dinnerware. We contend that dinnerware provides a visual anchor of an appropriate fill-level, which in turn, serves as a consumption norm (Study 1). The trouble with these dinnerware-suggested consumption norms is that they vary directly with dinnerware size--Study 2 shows Chinese buffet diners with large plates served 52% more, ate 45% more, and wasted 135% more food than those with smaller plates. Moreover, education does not appear effective in reducing such biases. Even a 60-min, interactive, multimedia warning on the dangers of using large plates had seemingly no impact on 209 health conference attendees, who subsequently served nearly twice as much food when given a large buffet plate 2 hr later (Study 3). These findings suggest that people may have a visual plate-fill level--perhaps 70% full--that they anchor on when determining the appropriate consumption norm and serving themselves. Study 4 suggests that the Delboeuf illusion offers an explanation why people do not fully adjust away from this fill-level anchor and continue to be biased across a large range of dishware sizes. These findings have surprisingly wide-ranging win-win implications for the welfare of consumers as well as for food service managers, restaurateurs, packaged goods managers, and public policy officials. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Relevance, Derogation and Permission

    NASA Astrophysics Data System (ADS)

    Stolpe, Audun

    We show that a recently developed theory of positive permission based on the notion of derogation is hampered by a triviality result that indicates a problem with the underlying full-meet contraction operation. We suggest a solution that presupposes a particular normal form for codes of norms, adapted from the theory of relevance through propositional letter sharing. We then establish a correspondence between contractions on sets of norms in input/output logic (derogations), and AGM-style contractions on sets of formulae, and use it as a bridge to migrate results on propositional relevance from the latter to the former idiom. Changing the concept accordingly we show that positive permission now incorporates a relevance requirement that wards off triviality.

  3. Exact Exchange calculations for periodic systems: a real space approach

    NASA Astrophysics Data System (ADS)

    Natan, Amir; Marom, Noa; Makmal, Adi; Kronik, Leeor; Kuemmel, Stephan

    2011-03-01

    We present a real-space method for exact-exchange Kohn-Sham calculations of periodic systems. The method is based on self-consistent solutions of the optimized effective potential (OEP) equation on a three-dimensional non-orthogonal grid, using norm conserving pseudopotentials. These solutions can be either exact, using the S-iteration approach, or approximate, using the Krieger, Li, and Iafrate (KLI) approach. We demonstrate, using a variety of systems, the importance of singularity corrections and use of appropriate pseudopotentials.

  4. Corner-corrected diagonal-norm summation-by-parts operators for the first derivative with increased order of accuracy

    NASA Astrophysics Data System (ADS)

    Del Rey Fernández, David C.; Boom, Pieter D.; Zingg, David W.

    2017-02-01

    Combined with simultaneous approximation terms, summation-by-parts (SBP) operators offer a versatile and efficient methodology that leads to consistent, conservative, and provably stable discretizations. However, diagonal-norm operators with a repeating interior-point operator that have thus far been constructed suffer from a loss of accuracy. While on the interior, these operators are of degree 2p, at a number of nodes near the boundaries, they are of degree p, and therefore of global degree p - meaning the highest degree monomial for which the operators are exact at all nodes. This implies that for hyperbolic problems and operators of degree greater than unity they lead to solutions with a global order of accuracy lower than the degree of the interior-point operator. In this paper, we develop a procedure to construct diagonal-norm first-derivative SBP operators that are of degree 2p at all nodes and therefore can lead to solutions of hyperbolic problems of order 2p + 1. This is accomplished by adding nonzero entries in the upper-right and lower-left corners of SBP operator matrices with a repeating interior-point operator. This modification necessitates treating these new operators as elements, where mesh refinement is accomplished by increasing the number of elements in the mesh rather than increasing the number of nodes. The significant improvements in accuracy of this new family, for the same repeating interior-point operator, are demonstrated in the context of the linear convection equation.
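    For reference, the classical second-order diagonal-norm SBP first-derivative operator illustrates both the SBP convention (Q + Q^T = B, which underlies the energy/L2 stability argument) and the boundary accuracy drop that the corner-corrected operators of this paper are designed to remove. The check below is a generic textbook construction, not the new operator family itself.

```python
import numpy as np

n, dx = 11, 0.1
x = dx * np.arange(n)

# Classical diagonal-norm second-order SBP first-derivative operator: D = H^{-1} Q.
H = dx * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1).
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))                      # True

# Degree check: exact for x at every node, but for x^2 only at interior nodes
# (the boundary accuracy drop that corner corrections aim to remove).
print(np.max(np.abs(D @ x - 1.0)))                  # near machine precision
print(np.abs(D @ x**2 - 2 * x)[[0, n // 2, -1]])    # O(dx) at the boundaries, ~0 inside
```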

  5. Minimum Altitude-Loss Soaring in a Specified Vertical Wind Distribution

    NASA Technical Reports Server (NTRS)

    Pierson, B. L.; Chen, I.

    1979-01-01

    Minimum altitude-loss flight of a sailplane through a given vertical wind distribution is discussed. The problem is posed as an optimal control problem, and several numerical solutions are obtained for a sinusoidal wind distribution.

  6. A minimum propellant solution to an orbit-to-orbit transfer using a low thrust propulsion system

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1991-01-01

    The Space Exploration Initiative is considering the use of low thrust (nuclear electric, solar electric) and intermediate thrust (nuclear thermal) propulsion systems for transfer to Mars and back. Due to the duration of such a mission, a low thrust minimum-fuel solution is of interest; a savings of fuel can be substantial if the propulsion system is allowed to be turned off and back on. This switching of the propulsion system helps distinguish the minimal-fuel problem from the well-known minimum-time problem. Optimal orbit transfers are also of interest to the development of a guidance system for orbital maneuvering vehicles which will be needed, for example, to deliver cargoes to the Space Station Freedom. The problem of optimizing trajectories for an orbit-to-orbit transfer with minimum-fuel expenditure using a low thrust propulsion system is addressed.

  7. Elastohydrodynamic lubrication of point contacts. Ph.D. Thesis - Leeds Univ.

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1976-01-01

    A procedure for the numerical solution of the complete, isothermal, elastohydrodynamic lubrication problem for point contacts is given. This procedure calls for the simultaneous solution of the elasticity and Reynolds equations. By using this theory the influence of the ellipticity parameter and the dimensionless speed, load, and material parameters on the minimum and central film thicknesses was investigated. Thirty-four different cases were used in obtaining the fully flooded minimum- and central-film-thickness formulas. Lubricant starvation was also studied. From the results it was possible to express the minimum film thickness for a starved condition in terms of the minimum film thickness for a fully flooded condition, the speed parameter, and the inlet distance. Fifteen additional cases plus three fully flooded cases were used in obtaining this formula. Contour plots of pressure and film thickness in and around the contact have been presented for both fully flooded and starved lubrication conditions.

  8. "Creative solutions": selling cigarettes in a smoke-free world

    PubMed Central

    Smith, E; Malone, R

    2004-01-01

    Objective: To analyse the development and execution of the "Creative Solutions" Benson & Hedges advertising campaign to understand its social, political, and commercial implications. Methods: Searches of the Philip Morris documents and Legacy Tobacco Documents websites for relevant materials; Lexis/Nexis searches of major news and business publications; and denotative and connotative analyses of the advertising imagery. Results: Philip Morris developed the Creative Solutions campaign in an effort to directly confront the successes of the tobacco control movement in establishing new laws and norms that promoted clean indoor air. The campaign's imagery attempted to help smokers and potential smokers overcome the physical and social downsides of smoking cigarettes by managing risk and resolving internal conflict. The slogans suggested a variety of ways for smokers to respond to restrictions on their habit. The campaign also featured information about the Accommodation Program, Philip Morris's attempt to organise opposition to clean indoor air laws. Conclusion: The campaign was a commercial failure, with little impact on sales of the brand. Philip Morris got some exposure for the Accommodation Program and its anti-regulatory position. The lack of commercial response to the ads suggests that they were unable to successfully resolve the contradictions that smokers were increasingly experiencing and confirms the power of changing social norms to counter tobacco industry tactics. PMID:14985598

  9. The stage-value model: Implications for the changing standards of care.

    PubMed

    Görtz, Daniel Patrik; Commons, Michael Lamport

    2015-01-01

    The standard of care is a legal and professional notion against which doctors and other medical personnel are held liable. The standard of care changes as new scientific findings and technological innovations within medicine, pharmacology, nursing and public health are developed and adopted. This study consists of four parts. Part 1 describes the problem and gives concrete examples of its occurrence. The second part discusses the application of the Model of Hierarchical Complexity to the field, giving examples of how standards of care are understood at different behavioral developmental stages. It presents the solution to the problem of standards of care at a Paradigmatic Stage 14. The solution at this stage is a deliberative, communicative process based around why certain norms should or should not apply in each specific case, by the use of "meta-norms". Part 3 proposes a Cross-Paradigmatic Stage 15 view of how the problem of changing standards of care can be solved. The proposed solution is to found the legal procedure in each case on well-established behavioral laws. We maintain that such a behavioristic, scientifically based justice would be much more proficient at effecting restorative legal interventions that create desired behaviors. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Optimal Trajectories For Orbital Transfers Using Low And Medium Thrust Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1992-01-01

    For many problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high thrust transfers resemble the well-known impulsive transfers where the burn arcs are of very short duration. The low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs which will require more revolutions, thus making the low thrust transfer computationally intensive. Here, we consider optimal low and medium thrust orbital transfers.

  11. Balancing selfishness and norm conformity can explain human behavior in large-scale prisoner's dilemma games and can poise human groups near criticality

    NASA Astrophysics Data System (ADS)

    Realpe-Gómez, John; Andrighetto, Giulia; Nardin, Luis Gustavo; Montoya, Javier Antonio

    2018-04-01

    Cooperation is central to the success of human societies as it is crucial for overcoming some of the most pressing social challenges of our time; still, how human cooperation is achieved and may persist is a main puzzle in the social and biological sciences. Recently, scholars have recognized the importance of social norms as solutions to major local and large-scale collective action problems, from the management of water resources to the reduction of smoking in public places to the change in fertility practices. Yet a well-founded model of the effect of social norms on human cooperation is still lacking. Using statistical-physics techniques and integrating findings from cognitive and behavioral sciences, we present an analytically tractable model in which individuals base their decisions to cooperate both on the economic rewards they obtain and on the degree to which their action complies with social norms. Results from this parsimonious model are in agreement with observations in recent large-scale experiments with humans. We also find the phase diagram of the model and show that the experimental human group is poised near a critical point, a regime where recent work suggests living systems respond to changing external conditions in an efficient and coordinated manner.

  12. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

    The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to identify accurately the protein-coding regions in DNA is rising rapidly. Several techniques of DNA feature extraction involving various cross fields have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions while completely eliminating background noise. A comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been carried out on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
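    The baseline period-3 measure that such methods are compared against, a sliding-window DFT of the four base-indicator sequences evaluated at the N/3 frequency bin, can be sketched as follows. This illustrates the modified-periodogram idea only; it is not the least-norm estimator proposed in the paper, and the synthetic sequence and window length are arbitrary choices.

```python
import numpy as np

def period3_spectrum(seq, window=351, step=3):
    """Sliding-window period-3 measure: for each window, sum over the four bases of
    |X_b(k)|^2 at k = N/3, where X_b is the DFT of the binary indicator sequence
    of base b.  Coding-like regions tend to show elevated values."""
    seq = seq.upper()
    N = window
    k = N // 3                       # period-3 bin (window chosen divisible by 3)
    positions, scores = [], []
    for start in range(0, len(seq) - N + 1, step):
        win = seq[start:start + N]
        s = 0.0
        for base in "ACGT":
            ind = np.array([1.0 if c == base else 0.0 for c in win])
            s += np.abs(np.fft.fft(ind)[k]) ** 2
        positions.append(start + N // 2)
        scores.append(s)
    return np.array(positions), np.array(scores)

# Synthetic test: random background with an embedded, exaggerated 3-periodic segment.
rng = np.random.default_rng(2)
background = "".join(rng.choice(list("ACGT"), size=2000))
coding_like = "ATG" * 300
seq = background[:1000] + coding_like + background[1000:]
pos, score = period3_spectrum(seq)
print(pos[np.argmax(score)])        # peak should fall inside the inserted segment
```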

  13. Numerical simulation of KdV equation by finite difference method

    NASA Astrophysics Data System (ADS)

    Yokus, A.; Bulut, H.

    2018-05-01

    In this study, numerical solutions to the KdV equation with dual power nonlinearity are obtained using the finite difference method. The discretized equation is presented in the form of finite difference operators. The numerical solutions are validated using the analytical solution to the KdV equation with dual power nonlinearity which is available in the literature. Through the Fourier-von Neumann technique and a linear stability analysis, we show that the FDM is stable. The accuracy of the method is analyzed via the L2 and L_{∞} norm errors. The numerical and exact approximations and the absolute error are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. For suitable choices of the parameter values, the 2D and 3D surfaces for the used analytical solution are plotted.
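    As a generic illustration of a finite-difference treatment of KdV-type equations, the sketch below applies the classical Zabusky-Kruskal leapfrog scheme to the standard single-power KdV equation u_t + u u_x + delta^2 u_xxx = 0 on a periodic domain. The paper's dual power nonlinearity and its particular discretization are not reproduced here; the scheme, initial condition, and step sizes are textbook choices and are assumed only for illustration.

```python
import numpy as np

# Zabusky-Kruskal leapfrog scheme for u_t + u u_x + delta^2 u_xxx = 0 (periodic domain).
delta = 0.022
nx, L = 256, 2.0
dx = L / nx
dt = 1.0e-5                     # well inside the explicit stability limit for these values
x = dx * np.arange(nx)
u_prev = np.cos(np.pi * x)      # classical initial condition

def rhs(u):
    """Semi-discrete right-hand side: averaged nonlinear flux plus central u_xxx."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    nonlinear = (up1 + u + um1) * (up1 - um1) / (6.0 * dx)
    dispersive = delta**2 * (up2 - 2 * up1 + 2 * um1 - um2) / (2.0 * dx**3)
    return -(nonlinear + dispersive)

u_curr = u_prev + dt * rhs(u_prev)          # first step: forward Euler
for _ in range(20000):                      # leapfrog time marching to t = 0.2
    u_next = u_prev + 2.0 * dt * rhs(u_curr)
    u_prev, u_curr = u_curr, u_next

# Discrete L2 and L_inf norms of the change from the initial data, as a simple diagnostic.
print(np.sqrt(dx) * np.linalg.norm(u_curr - np.cos(np.pi * x)),
      np.max(np.abs(u_curr - np.cos(np.pi * x))))
```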

  14. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce the radiation dose. Total variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS-based CT reconstruction, and to incorporate a reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, contrarily, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to the difficulty of defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed as an approximation to the ideal L0-min., can reduce the dependence on defining the weight function, therefore improving the accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. using the Split Bregman iterative method. For evaluation, a noisy digital phantom and pelvic CT images are employed to compare the quality of images reconstructed by TV, NLTV, and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in the signal-to-noise ratio (SNR) and root-mean-squared errors of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm was able to slightly improve SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to noise effects than TV min.

  15. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
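    A minimal version of such a BIP can be written down directly. The sketch below, which assumes SciPy's HiGHS-based milp interface (SciPy 1.9+), assigns a handful of synthetic earthquakes to a few faults so that the total absolute slip-rate mismatch is minimized, with each event assigned to exactly one fault; the event slips, target rates, and problem sizes are toy placeholders, and the UCERF3 uncertainty bounds and magnitude constraints of the actual study are omitted.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy setting (illustrative sizes only): E synthetic earthquakes, F faults.
rng = np.random.default_rng(2)
E, F = 30, 4
slip = rng.uniform(0.05, 0.5, size=E)     # slip-rate contribution of each event if placed on a fault
target = rng.uniform(0.5, 3.0, size=F)    # "observed" fault slip rates to be matched

n_x = E * F                               # binary assignment variables x[e, f], index e*F + f
n = n_x + F                               # plus one slack per fault for the L1 mismatch

# Objective: minimize the sum of per-fault absolute mismatches (the slacks).
c = np.concatenate([np.zeros(n_x), np.ones(F)])

# Each earthquake is assigned to exactly one fault.
A_assign = np.zeros((E, n))
for e in range(E):
    A_assign[e, e * F:(e + 1) * F] = 1.0
assign = LinearConstraint(A_assign, lb=1.0, ub=1.0)

# |sum_e slip_e x[e,f] - target_f| <= t_f, encoded as two one-sided constraints.
A_pos = np.zeros((F, n)); A_neg = np.zeros((F, n))
for f in range(F):
    cols = np.arange(f, n_x, F)
    A_pos[f, cols] = slip;  A_pos[f, n_x + f] = -1.0     #  fitted - t <= target
    A_neg[f, cols] = -slip; A_neg[f, n_x + f] = -1.0     # -fitted - t <= -target
mismatch = [LinearConstraint(A_pos, ub=target), LinearConstraint(A_neg, ub=-target)]

integrality = np.concatenate([np.ones(n_x), np.zeros(F)])   # x binary, slacks continuous
bounds = Bounds(lb=np.zeros(n), ub=np.concatenate([np.ones(n_x), np.full(F, np.inf)]))

res = milp(c, constraints=[assign] + mismatch, integrality=integrality, bounds=bounds)
print(res.status, res.fun)   # 0 on success; total absolute slip-rate mismatch at the optimum
```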

  16. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on one-dimensional solution path and traditional grid search, because CS-SVM is with two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has a better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than recent proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  17. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  18. L^2 stability for weak solutions of the Navier-Stokes equations in R^3

    NASA Astrophysics Data System (ADS)

    Secchi, P.

    1985-11-01

    We consider the motion of a viscous fluid filling the whole space R^3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. Up to now, the only available global existence theorem (other than for sufficiently small initial data) is that of weak (turbulent) solutions. From both the mathematical and the physical point of view, an interesting property is the stability of such weak solutions. We assume that v(t,x) is a solution, with initial datum v0(x). We suppose that the initial datum is perturbed and consider one weak solution u corresponding to the new initial velocity. Then we prove that, due to viscosity, the perturbed weak solution u approaches the unperturbed one in a suitable norm as time goes to +infinity, without smallness assumptions on the initial perturbation.

  19. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2014-03-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.
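    For reference, the sketch below shows one trajectory step of the leapfrog integrator next to one common (position-update-first) variant of the second-order minimum norm integrator, whose coefficient λ ≈ 0.19318 is the standard minimum-norm choice; the harmonic potential in the demo and the step size are purely illustrative, and this is not the RSV-specific molecular dynamics used in the paper.

```python
import numpy as np

LAMBDA_2MN = 0.1931833275037836   # standard minimum-norm coefficient

def leapfrog_step(x, p, eps, grad_U):
    """One leapfrog MD step of size eps for Hamiltonian H = U(x) + p^2/2."""
    p = p - 0.5 * eps * grad_U(x)
    x = x + eps * p
    p = p - 0.5 * eps * grad_U(x)
    return x, p

def min_norm_2_step(x, p, eps, grad_U, lam=LAMBDA_2MN):
    """One 2nd-order minimum norm (2MNI) step, position-update-first variant:
    same order of accuracy as leapfrog but with a smaller leading error term."""
    x = x + lam * eps * p
    p = p - 0.5 * eps * grad_U(x)
    x = x + (1.0 - 2.0 * lam) * eps * p
    p = p - 0.5 * eps * grad_U(x)
    x = x + lam * eps * p
    return x, p

# Illustrative check on a harmonic "potential" U(x) = x^2/2.
grad_U = lambda x: x
x, p = 1.0, 0.0
for _ in range(100):
    x, p = min_norm_2_step(x, p, 0.1, grad_U)
print(0.5 * (x**2 + p**2))   # energy stays close to its initial value 0.5 (symplectic scheme)
```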

  20. On the functional optimization of a certain class of nonstationary spatial functions

    USGS Publications Warehouse

    Christakos, G.; Paraskevopoulos, P.N.

    1987-01-01

    Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria and are applicable to multidimensional phenomena characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.

  1. A finite-state, finite-memory minimum principle, part 2

    NASA Technical Reports Server (NTRS)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    In part 1 of this paper, a minimum principle was found for the finite-state, finite-memory (FSFM) stochastic control problem. In part 2, conditions for the sufficiency of the minimum principle are stated in terms of the informational properties of the problem. This is accomplished by introducing the notion of a signaling strategy. Then a min-H algorithm based on the FSFM minimum principle is presented. This algorithm converges, after a finite number of steps, to a person-by-person extremal solution.

  2. Integrating Asian Clients' Filial Piety Beliefs into Solution-Focused Brief Therapy

    ERIC Educational Resources Information Center

    Hsu, Wei-Su; Wang, Chiachih D. C.

    2011-01-01

    Culturally sensitive counseling models for non-Western clients are rarely seen in the literature. Because filial piety is a prevailing cultural belief in Taiwanese/Chinese societies and influences a wide range of individual and interpersonal behaviors, counseling and psychotherapy would be most effective when this cultural norm is considered and…

  3. Theory and analysis of statistical discriminant techniques as applied to remote sensing data

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1973-01-01

    Classification of remote earth resources sensing data according to normed exponential density statistics is reported. The use of density models appropriate for several physical situations provides an exact solution for the probabilities of classifications associated with the Bayes discriminant procedure even when the covariance matrices are unequal.

  4. CRTs and NRTs Together.

    ERIC Educational Resources Information Center

    Noggle, Nelson L.

    The potential use of criterion referenced tests (CRT) and norm referenced tests (NRT) in the same testing program is discussed. The advantages and disadvantages of each are listed, and the best solution, a merging, is proposed. To merge CRTs and NRTs into an overall testing program, meaningful and useful to all levels, consideration must be given…

  5. Bystander Education: Bringing a Broader Community Perspective to Sexual Violence Prevention

    ERIC Educational Resources Information Center

    Banyard, Victoria L.; Plante, Elizabethe G.; Moynihan, Mary M.

    2004-01-01

    Recent research documents the problem of sexual violence across communities, often finding its causes to be embedded in community and cultural norms, thus demonstrating the need for community-focused solutions. In this article we synthesize research from community psychology on community change and prevention with more individually focused studies…

  6. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  8. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
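    The nuclear-norm regression underlying this scheme is usually handled with proximal steps whose core building block is singular value thresholding. The sketch below shows only that generic operator applied to a toy low-rank-plus-noise "error image"; it is not the authors' multi-scale patch ensemble, and the matrix sizes and threshold are arbitrary.

```python
import numpy as np

def svt(E, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_* (nuclear norm).
    Shrinks the singular values of E toward zero, encouraging the low-rank structure
    that nuclear-norm matrix regression exploits in the reconstructed error image."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Illustrative use on a noisy low-rank "error image".
rng = np.random.default_rng(3)
low_rank = rng.standard_normal((32, 3)) @ rng.standard_normal((3, 32))   # rank-3 structure
E = low_rank + 0.1 * rng.standard_normal((32, 32))
E_denoised = svt(E, tau=2.0)
print(np.linalg.matrix_rank(E_denoised, tol=1e-6))   # typically 3, recovering the low-rank part
```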

  9. Factor validity and norms for the aberrant behavior checklist in a community sample of children with mental retardation.

    PubMed

    Marshburn, E C; Aman, M G

    1992-09-01

    The Aberrant Behavior Checklist (ABC) is a 58-item rating scale that was developed primarily to measure the effects of pharmacological intervention in individuals living in residential facilities. This study investigated the use of the ABC in a sample of community children with mental retardation. Teacher ratings on the ABC were collected on 666 students attending special classes. The data were factor analyzed and compared with other studies using the ABC. In addition, subscales were analyzed as a function of age, sex, and classroom placement, and preliminary norms were derived. A four-factor solution of the ABC was obtained. Congruence between the four derived factors and corresponding factors from the original ABC was high (congruence coefficients ranged between .87 and .96). Classroom placement and age had significant effects on subscale scores, whereas sex failed to affect ratings. The current results are sufficiently close to the original factor solution that the original scoring method can be used with community samples, although further studies are needed to look at this in more detail.

  10. Panel flutter optimization by gradient projection

    NASA Technical Reports Server (NTRS)

    Pierson, B. L.

    1975-01-01

    A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of inplane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Bienvenido; Novo, Vicente

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given.

  12. Reconstructing the duty of water: a study of emergent norms in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Wescoat, J. L., Jr.

    2013-06-01

    This paper assesses changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, a line of research useful for anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late-18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for water rights appropriation (e.g., only 40 to 80 acres per cfs). The final section shows that while the duty of water concept has now been eclipsed by other measures and standards of water efficiency, it may have continuing relevance for anticipating if not predicting emerging social values with respect to water.

  13. Low thrust optimal orbital transfers

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1994-01-01

    For many optimal transfer problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high thrust transfers resemble the well known impulsive transfers where the burn arcs are of very short duration. The low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs and thus will require more revolutions. In this research, we considered two approaches for solving this problem: a powered flight guidance algorithm previously developed for higher thrust transfers was modified and an 'averaging technique' was investigated.

  14. Structure and anomalous solubility for hard spheres in an associating lattice gas model.

    PubMed

    Szortyka, Marcia M; Girardi, Mauricio; Henriques, Vera B; Barbosa, Marcia C

    2012-08-14

    In this paper we investigate the solubility of a hard-sphere gas in a solvent modeled as an associating lattice gas. The solution phase diagram for solute at 5% is compared with the phase diagram of the original solute free model. Model properties are investigated both through Monte Carlo simulations and a cluster approximation. The model solubility is computed via simulations and is shown to exhibit a minimum as a function of temperature. The line of minimum solubility (TmS) coincides with the line of maximum density (TMD) for different solvent chemical potentials, in accordance with the literature on continuous realistic models and on the "cavity" picture.

  15. A Minimum Delta V Orbit Maintenance Strategy for Low-Altitude Missions Using Burn Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2011-01-01

    Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. A low-lunar-orbit example demonstrates the Delta V savings from the feasible solution to the optimal solution. The strategy's extensibility to more complex missions is discussed, as well as the limitations of its use.

  16. The existence of minimum speed of traveling wave solutions to a non-KPP isothermal diffusion system

    NASA Astrophysics Data System (ADS)

    Chen, Xinfu; Liu, Guirong; Qi, Yuanwei

    2017-08-01

    The reaction-diffusion system a_t = a_{xx} - ab^n, b_t = D b_{xx} + ab^n, where n ≥ 1 and D > 0, arises from many real-world chemical reactions. The case n = 1 is the KPP-type nonlinearity, which has been studied extensively, with very important results obtained not only in one-dimensional spatial domains but also in multi-dimensional spaces; the case n > 1 proves to be much harder. One of the interesting features of the system is the existence of traveling wave solutions. In particular, for traveling wave solutions a(x, t) = a(x - vt), b(x, t) = b(x - vt) with v > 0 and the boundary condition lim_{x → -∞} (a, b) = (0, 1), many authors have proved, with different bounds v*(n, D) > 0, that a traveling wave solution exists for any v ≥ v* when n > 1; for the latest progress, see [7]. That is, the traveling wave problem exhibits the mono-stable phenomenon of the scalar equation u_t = u_{xx} + f(u) with f(0) = f(1) = 0 and f(u) > 0 in (0, 1), where u = 0 is unstable and u = 1 is stable. A natural and significant question is whether, as in the scalar case, there exists a minimum speed: that is, whether there exists a minimum speed v_min > 0 such that a traveling wave solution of speed v exists iff v ≥ v_min. This has remained an open question despite many works on traveling waves of the system over the last thirty years, because, unlike the KPP case, the minimum speed cannot be obtained through linear analysis at the equilibrium points (a, b) = (0, 1) and (a, b) = (1, 0). In this work, we give an affirmative answer to this question.

  17. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions to the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that the linearized L^2-stability requirement must be strengthened. It is assumed that the approximate solutions are Lip^+-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires the consistency requirement to be weakened; it is measured in the Lip'-(semi)norm. It is proved, for Lip^+-stable approximate solutions, that their Lip'-convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L^p convergence rate estimates.

  18. Boundary control of elliptic solutions to enforce local constraints

    NASA Astrophysics Data System (ADS)

    Bal, G.; Courdurier, M.

    We present a constructive method to devise boundary conditions for solutions of second-order elliptic equations so that these solutions satisfy specific qualitative properties such as: (i) the norm of the gradient of one solution is bounded from below by a positive constant in the vicinity of a finite number of prescribed points; (ii) the determinant of gradients of n solutions is bounded from below in the vicinity of a finite number of prescribed points. Such constructions find applications in recent hybrid medical imaging modalities. The methodology is based on starting from a controlled setting in which the constraints are satisfied and continuously modifying the coefficients in the second-order elliptic equation. The boundary condition is evolved by solving an ordinary differential equation (ODE) defined via appropriate optimality conditions. Unique continuations and standard regularity results for elliptic equations are used to show that the ODE admits a solution for sufficiently long times.

  19. Uniqueness Results for Weak Leray-Hopf Solutions of the Navier-Stokes System with Initial Values in Critical Spaces

    NASA Astrophysics Data System (ADS)

    Barker, T.

    2018-03-01

    The main subject of this paper concerns the establishment of certain classes of initial data which grant short-time uniqueness of the associated weak Leray-Hopf solutions of the three-dimensional Navier-Stokes equations. In particular, our main theorem states that this holds for any solenoidal initial data with finite L^2(R^3) norm that also belongs to certain subsets of VMO^{-1}(R^3). As a corollary, we obtain the same conclusion for any solenoidal u_0 belonging to L^2(R^3) ∩ \dot{B}^{-1+3/p}_{p,∞}(R^3), for any 3 < p < ∞.

  20. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    NASA Astrophysics Data System (ADS)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated, physical systems via the nonlinear Schrodinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have for solutions of the nonlinear Schrodinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the order 1 over epsilon squared time scale.

  1. Investigating Proenvironmental Behavior: The Case of Commuting Mode Choice

    NASA Astrophysics Data System (ADS)

    Trinh, Tu Anh; Phuong Linh Le, Thi

    2018-04-01

    The central aim of this article is to investigate mode choice behavior among commuters in Ho Chi Minh City using a disaggregate mode choice model and norm activation theory. A better understanding of commuters' choice of transport mode provides valuable information on their travel behavior, which helps to build a basis for solutions that encourage commuters to switch to public transport and, in turn, to address traffic and environmental problems. Binary logistic regression was employed within the disaggregate choice method. Key findings indicated that demographic factors including Age (-0.308), Married (-9.089), and Weather (-8.272); trip factors including Travel cost (0.437) and Travel distance (0.252); and norm activation theory factors (Awareness of consequences: AC2 (-1.699), AC4 (2.951), AC6 (-3.523), AC7 (-2.092), AC9 (-3.045), AC11 (+2.939); and Personal norms: PN2 (-2.695)) had a strong impact on the commuters' mode choice. Although the motorcycle was the major transport mode among commuters, they expressed willingness to switch to bus transport if it had less negative impact on the environment and their daily living environment.

  2. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed by a linear combination of several pure spectra in an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0 norm based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlative objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
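    The two objectives can be evaluated directly from a binary support vector over the library, which is the representation the flipping strategy works on. The sketch below, using an invented toy library, computes the reconstruction error (with least-squares abundances on the selected spectra) and the l0 sparsity for a candidate support and applies one random bit flip, the elementary move of a population-based flipping search; it is not the authors' full multi-objective algorithm.

```python
import numpy as np

def objectives(y, A, z):
    """Evaluate the two unmixing objectives for a binary support vector z:
    (reconstruction error with least-squares abundances on the selected
    library spectra, number of selected endmembers = l0 'norm')."""
    idx = np.flatnonzero(z)
    if idx.size == 0:
        return np.linalg.norm(y), 0
    coeffs = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
    return np.linalg.norm(y - A[:, idx] @ coeffs), idx.size

def random_flip(z, rng):
    """Flip one randomly chosen bit of the support vector (the basic search move)."""
    z = z.copy()
    i = rng.integers(z.size)
    z[i] = 1 - z[i]
    return z

# Toy usage with an invented library of 50 spectra over 20 bands.
rng = np.random.default_rng(4)
A = np.abs(rng.standard_normal((20, 50)))
true_support = rng.choice(50, 3, replace=False)
y = A[:, true_support] @ np.array([0.5, 0.3, 0.2])
z = np.zeros(50, dtype=int)
z[rng.choice(50, 5, replace=False)] = 1
print(objectives(y, A, z), objectives(y, A, random_flip(z, rng)))
```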

  3. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.

  4. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight edge-connected subset containing all the vertices of a given undirected graph. It is a vitally important NP-complete problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for NP-hard problems whose solutions consist of multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem within a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
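    For a linear forward model M = L J + noise, the regularized minimum norm transform used here has the closed form J = Lᵀ(L Lᵀ + λ² I)⁻¹ M. The sketch below is a generic implementation for an arbitrary lead-field matrix; the random lead field, the value of λ, and the data are placeholders, and the SNR-matched choice of regularization discussed in the paper is not shown.

```python
import numpy as np

def minimum_norm_operator(L, lam):
    """Tikhonov-regularized minimum norm inverse operator.

    L   : (n_sensors, n_sources) lead-field / forward matrix
    lam : regularization parameter (larger values suppress noise more strongly)
    Returns W such that J_hat = W @ M for sensor data M (n_sensors, n_times).
    """
    n_sensors = L.shape[0]
    gram = L @ L.T + lam**2 * np.eye(n_sensors)
    return L.T @ np.linalg.solve(gram, np.eye(n_sensors))

# Placeholder usage: 32 sensors, 500 candidate sources, one time sample.
rng = np.random.default_rng(5)
L = rng.standard_normal((32, 500))
W = minimum_norm_operator(L, lam=1.0)
m = rng.standard_normal(32)
j_hat = W @ m            # minimum-norm source estimate for this sensor vector
print(j_hat.shape)        # (500,)
```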

  6. Treatment of NORM contaminated soil from the oilfields.

    PubMed

    Abdellah, W M; Al-Masri, M S

    2014-03-01

    Uncontrolled disposal of oilfield produced water in the surrounding environment could lead to soil contamination by naturally occurring radioactive materials (NORM). Large volumes of soil become highly contaminated with radium isotopes ((226)Ra and (228)Ra). In the present work, laboratory experiments have been conducted to reduce the activity concentration of (226)Ra in soil. Two techniques were used, namely mechanical separation and chemical treatment. Screening of contaminated soil using a vibratory sieve shaker was performed to evaluate the feasibility of particle size separation. The fractions obtained ranged from less than 38 μm to higher than 300 μm. The results show that (226)Ra activity concentrations vary widely from fraction to fraction. On the other hand, leaching of (226)Ra from soil by aqueous solutions (distilled water, mineral acids, alkaline media and selective solvents) has been performed. In most cases, relatively low concentrations of radium were transferred to solution, which indicates that only small portions of radium are present on the surface of soil particles (around 4.6%), while most radium is located within the soil particles; only concentrated nitric acid was effective, with 50% of (226)Ra removed to the aqueous phase. However, the mechanical method was found to be easy and effective, taking into account the safety procedures to be followed during blending and homogenization. Chemical extraction methods were found to be less effective. The results obtained in this study can be utilized to approach the final option for disposal of NORM contaminated soil in the oilfields. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. A class of systolizable IIR digital filters and its design for proper scaling and minimum output roundoff noise

    NASA Technical Reports Server (NTRS)

    Lei, Shaw-Min; Yao, Kung

    1990-01-01

    A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.

  8. Cosmic acceleration from M theory on twisted spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neupane, Ishwaree P.; Wiltshire, David L.

    2005-10-15

    In a recent paper [I. P. Neupane and D. L. Wiltshire, Phys. Lett. B 619, 201 (2005).] we have found a new class of accelerating cosmologies arising from a time-dependent compactification of classical supergravity on product spaces that include one or more geometric twists along with nontrivial curved internal spaces. With such effects, a scalar potential can have a local minimum with positive vacuum energy. The existence of such a minimum generically predicts a period of accelerated expansion in the four-dimensional Einstein conformal frame. Here we extend our knowledge of these cosmological solutions by presenting new examples and discuss the properties of the solutions in a more general setting. We also relate the known (asymptotic) solutions for multiscalar fields with exponential potentials to the accelerating solutions arising from simple (or twisted) product spaces for internal manifolds.

  9. Time optimal paths for high speed maneuvering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Lenhart, S.M.

    1993-01-01

    Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.

  10. Lubrication of rigid ellipsoidal solids

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Dowson, D.

    1982-01-01

    The influence of geometry on the isothermal hydrodynamic film separating two rigid solids was investigated. The minimum film thickness is derived for fully flooded conjunctions by using the Reynolds boundary conditions. It was found that the minimum film thickness had the same speed, viscosity, and load dependence as Kapitza's classical solution. However, the incorporation of the Reynolds boundary conditions resulted in an additional geometry effect. Solutions using the parabolic film approximation are compared with those using the exact expression for the film in the analysis. Contour plots are shown that indicate in detail the pressure developed between the solids.

  11. Executive Functions: Insights into Ways to Help More Children Thrive

    ERIC Educational Resources Information Center

    Diamond, Adele

    2014-01-01

    Executive functions enable children to pay attention, follow instructions, apply what they have learned, have those "aha!" moments in which they grasp how multiple facts interrelate, think of creative solutions, obey social norms such as waiting their turn and not butting in line or jumping out of their seat, mentally construct a plan,…

  12. The Popeye principle: selling child health in the first nutrition crisis.

    PubMed

    Lovett, Laura

    2005-10-01

    The cartoon character Popeye the Sailor was capable of superhuman feats of strength after eating a can of spinach. Popeye ate spinach because the association of spinach with strength was a product of the first national nutrition crisis in the United States: the 1920s fight against child malnutrition. Spanning the first three decades of the twentieth century, the malnutrition crisis arose from the confluence of many different events including the invention of nutrition science and new standards for height and weight; international food crises created by world war; the rise of consumerism, advertising, and new forms of mass media; and Progressive reformers' conviction that education was a key component of any solution. The history of the malnutrition crisis presented in this essay synthesizes disparate histories concerning advertising, public health, education, consumerism, philanthropy, and Progressive Era reform with original analysis of a major nutrition education program sponsored by the Commonwealth Fund in the 1920s. Because the character of Popeye came to embody one of the nutritional norms advocated in the 1920s, I refer to the influence of culturally constructed social norms on children's beliefs about health and nutrition as the Popeye Principle. The history of the malnutrition crisis demonstrates the importance of understanding the cultural and economic conditions surrounding childhood nutrition, the use and influence of numerical norms, and the mutually reinforcing influences on children's nutritional norms from their parents, peers, teachers, and culture.

  13. On lower bounds for possible blow-up solutions to the periodic Navier-Stokes equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortissoz, Jean C., E-mail: jcortiss@uniandes.edu.co; Montero, Julio A., E-mail: ja.montero907@uniandes.edu.co; Pinilla, Carlos E., E-mail: ce.pinilla108@uniandes.edu.co

    2014-03-15

    We show a new lower bound on the \dot{H}^{3/2}(T^3) norm of a possible blow-up solution to the Navier-Stokes equation, and also comment on the extension of this result to the whole space. This estimate can be seen as a natural limiting result for Leray's blow-up estimates in L^p(R^3), 3 < p < ∞. We also show a lower bound on the blow-up rate of a possible blow-up solution of the Navier-Stokes equation in \dot{H}^{5/2}(T^3), and give the corresponding extension to the case of the whole space.

  14. Hierarchic plate and shell models based on p-extension

    NASA Technical Reports Server (NTRS)

    Szabo, Barna A.; Sahrmann, Glenn J.

    1988-01-01

    Formulations of finite element models for beams, arches, plates and shells based on the principle of virtual work were studied. The focus is on computer implementation of hierarchic sequences of finite element models suitable for the numerical solution of a large variety of practical problems which may concurrently contain thin and thick plates and shells, stiffeners, and regions where three-dimensional representation is required. The approximate solutions corresponding to the hierarchic sequence of models converge to the exact solution of the fully three-dimensional model. The stopping criterion is based on: (1) estimation of the relative error in the energy norm; (2) equilibrium tests; and (3) observation of the convergence of quantities of interest.

  15. On steady motion of viscoelastic fluid of Oldroyd type

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranovskii, E. S., E-mail: esbaranovskii@gmail.com

    2014-06-01

    We consider a mathematical model describing the steady motion of a viscoelastic medium of Oldroyd type under the Navier slip condition at the boundary. In the rheological relation, we use the objective regularized Jaumann derivative. We prove the solvability of the corresponding boundary-value problem in the weak setting. We obtain an estimate for the norm of a solution in terms of the data of the problem. We show that the solution set is sequentially weakly closed. Furthermore, we give an analytic solution of the boundary-value problem describing the flow of a viscoelastic fluid in a flat channel under a slip condition at the walls. Bibliography: 13 titles. (paper)

  16. Decay estimates of solutions to the bipolar non-isentropic compressible Euler-Maxwell system

    NASA Astrophysics Data System (ADS)

    Tan, Zhong; Wang, Yong; Tong, Leilei

    2017-10-01

    We consider the global existence and large time behavior of solutions near a constant equilibrium state to the bipolar non-isentropic compressible Euler-Maxwell system in {R}3 , where the background magnetic field could be non-zero. The global existence is established under the assumption that the H 3 norm of the initial data is small, but its higher order derivatives could be large. Combining the negative Sobolev (or Besov) estimates with the interpolation estimates, we prove the optimal time decay rates of the solution and its higher order spatial derivatives. In this sense, our results improve the similar ones in Wang et al (2012 SIAM J. Math. Anal. 44 3429-57).

  17. Sensitivities of Soap Solutions in Leak Detection

    NASA Technical Reports Server (NTRS)

    Stuck, D.; Lam, D. Q.; Daniels, C.

    1985-01-01

    The document describes a method for determining the minimum leak rate to which soap-solution leak detectors are sensitive. Bubbles formed at smaller leak rates than previously assumed. In addition to presenting test results, the document discusses the effects of joint-flange configurations, properties of soap solutions, and correlation of test results with earlier data.

  18. The emergence of a global right to health norm--the unresolved case of universal access to quality emergency obstetric care.

    PubMed

    Hammonds, Rachel; Ooms, Gorik

    2014-02-27

    The global response to HIV suggests the potential of an emergent global right to health norm, embracing shared global responsibility for health, to assist policy communities in framing the obligations of the domestic state and the international community. Our research explores the extent to which this global right to health norm has influenced the global policy process around maternal health rights, with a focus on universal access to emergency obstetric care. In examining the extent to which arguments stemming from a global right to health norm have been successful in advancing international policy on universal access to emergency obstetric care, we looked at the period from 1985 to 2013. We adopted a qualitative case study approach applying a process-tracing methodology using multiple data sources, including an extensive literature review and limited key informant interviews, to analyse the international policy agenda setting process surrounding maternal health rights, focusing on emergency obstetric care. We applied John Kingdon's public policy agenda setting streams model to analyse our data. Kingdon's model suggests that to succeed as a mobilising norm, the right to health could work if it can help bring the problem, policy and political streams together, as it did with access to AIDS treatment. Our analysis suggests that despite a normative grounding in the right to health, prioritisation of the specific maternal health entitlements remains fragmented. Despite United Nations recognition of maternal mortality as a human rights issue, the relevant policy communities have not yet managed to shift the policy agenda to prioritise the global right to health norm of shared responsibility for realising access to emergency obstetric care. The experience of HIV advocates in pushing for global solutions based on right to health principles, including participation, solidarity and accountability, suggests potential avenues for utilising right to health based arguments to push for policy priority for universal access to emergency obstetric care in the post-2015 global agenda.

  19. New algorithms to compute the nearness symmetric solution of the matrix equation.

    PubMed

    Peng, Zhen-Yun; Fang, Yang-Zhi; Xiao, Xian-Wei; Du, Dan-Dan

    2016-01-01

    In this paper we consider the symmetric solution of the matrix equation AXB = C nearest to a given matrix in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive some necessary and sufficient conditions under which a given matrix is a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem, and analyze the global convergence of the proposed algorithms. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763-777, 2005) and Peng (Int J Comput Math 87:1820-1830, 2010).
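    As a baseline for the problem being solved (least-squares symmetric solutions of AXB = C in the Frobenius norm), the sketch below uses the vectorization identity vec(AXB) = (Bᵀ ⊗ A) vec(X) together with the symmetrizer (I + K)/2 and ordinary least squares; it is only a small-scale illustration of the problem, not the alternating-minimization-with-multiplier iterations proposed in the paper, and it does not handle the "nearest to a given matrix" variant.

```python
import numpy as np

def symmetric_lsq_solution(A, B, C):
    """Least-squares symmetric solution of A X B = C in the Frobenius norm.

    Uses vec(A X B) = (B^T kron A) vec(X) (column-stacking convention) and
    restricts X to symmetric matrices through the symmetrizer (I + K)/2,
    where K is the commutation matrix defined by K vec(X) = vec(X^T)."""
    n = A.shape[1]
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i + j * n, j + i * n] = 1.0      # commutation matrix
    P_sym = 0.5 * (np.eye(n * n) + K)          # projector onto symmetric matrices
    M = np.kron(B.T, A) @ P_sym                # forward map acting on symmetrized vec(X)
    v = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)[0]
    return (P_sym @ v).reshape((n, n), order="F")

# Toy usage with a consistent right-hand side built from a known symmetric matrix.
rng = np.random.default_rng(6)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
S_true = rng.standard_normal((3, 3)); S_true = 0.5 * (S_true + S_true.T)
X = symmetric_lsq_solution(A, B, A @ S_true @ B)
print(np.linalg.norm(X - S_true))              # essentially zero for this consistent problem
```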

  20. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

    As modern satellites trend toward both very large and very small scales, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and advances in computer technology continually enable new optimal algorithms, providing a powerful tool for solving the problem. Many papers on attitude adjustment have been published; the spacecraft configurations considered are rigid bodies with flexible parts or gyrostat-type systems, and the objective function usually involves minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices with a sequential single-axis slewing strategy. Recently, simultaneous three-axis minimum-time maneuver (reorientation) problems have been studied by many researchers. The minimum-time maneuver of a rigid spacecraft within onboard power limits is important to study, both for potential space applications such as surveying multiple targets and for its academic value. It is also a basic problem because solutions for maneuvering flexible spacecraft are built on the solution of the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for transferring the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible and the switching curve through the state-space origin is parabolic, whereas for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are in general only quasi-time-optimal. On the basis of the minimum principle, the reorientation of an inertially symmetric spacecraft with a time cost function, from an initial state of rest to a final state of rest, is studied, and the solution is obtained as follows. First, the necessary conditions for solving the problem are derived from the minimum principle; they yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case the solution is a bang-bang maneuver, with saturated controls for the entire maneuver. A singular control may exist, but it is singular only in a mathematical sense; physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are used in the singular case as well. Second, since the controls are always at their maximum, the key problem is to determine the switching points, so the original problem reduces to finding the switching times. The switch on/off times are adjusted with a genetic algorithm, a robust method, to determine the switching structure in the absence of gyroscopic coupling, and this work improves upon the traditional GA. The homotopy method for solving the resulting nonlinear algebraic equations is based on rigorous topological continuation theory; following the homotopy idea, relaxation parameters are introduced and the switching points are found with simulated annealing. Computer simulation results for a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.

  1. Design and Analysis of Optimal Ascent Trajectories for Stratospheric Airships

    NASA Astrophysics Data System (ADS)

    Mueller, Joseph Bernard

    Stratospheric airships are lighter-than-air vehicles that have the potential to provide a long-duration airborne presence at altitudes of 18-22 km. Designed to operate on solar power in the calm portion of the lower stratosphere and above all regulated air traffic and cloud cover, these vehicles represent an emerging platform that resides between conventional aircraft and satellites. A particular challenge for airship operation is the planning of ascent trajectories, as the slow moving vehicle must traverse the high wind region of the jet stream. Due to large changes in wind speed and direction across altitude and the susceptibility of airship motion to wind, the trajectory must be carefully planned, preferably optimized, in order to ensure that the desired station be reached within acceptable performance bounds of flight time and energy consumption. This thesis develops optimal ascent trajectories for stratospheric airships, examines the structure and sensitivity of these solutions, and presents a strategy for onboard guidance. Optimal ascent trajectories are developed that utilize wind energy to achieve minimum-time and minimum-energy flights. The airship is represented by a three-dimensional point mass model, and the equations of motion include aerodynamic lift and drag, vectored thrust, added mass effects, and accelerations due to mass flow rate, wind rates, and Earth rotation. A representative wind profile is developed based on historical meteorological data and measurements. Trajectory optimization is performed by first defining an optimal control problem with both terminal and path constraints, then using direct transcription to develop an approximate nonlinear parameter optimization problem of finite dimension. Optimal ascent trajectories are determined using SNOPT for a variety of upwind, downwind, and crosswind launch locations. Results of extensive optimization solutions illustrate definitive patterns in the ascent path for minimum-time flights across varying launch locations, and show that significant energy savings can be realized with minimum-energy flights, compared to minimum-time flights, given small increases in flight time. The performance of the optimal trajectories is then studied with respect to solar energy production during ascent, as well as sensitivity of the solutions to small changes in drag coefficient and wind model parameters. Results of solar power model simulations indicate that solar energy is sufficient to power ascent flights, but that significant energy loss can occur for certain types of trajectories. Sensitivity to the drag and wind model is approximated through numerical simulations, showing that optimal solutions change gradually with respect to changing wind and drag parameters and providing deeper insight into the characteristics of optimal airship flights. Finally, alternative methods are developed to generate near-optimal ascent trajectories in a manner suitable for onboard implementation. The structures and characteristics of previously developed minimum-time and minimum-energy ascent trajectories are used to construct simplified trajectory models, which are efficiently solved in a smaller numerical optimization problem. Comparison of these alternative solutions to the original SNOPT solutions shows excellent agreement, suggesting the alternate formulations are an effective means to develop near-optimal solutions in an onboard setting.

  2. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale de-couple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point-boundary-value-problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for a HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three local optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  3. Cultural bias and liberal neutrality: reconsidering the relationship between religion and liberalism through the lens of the physician-assisted suicide debate.

    PubMed

    Jones, Robert P

    2002-01-01

    Liberals often view religion chiefly as "a problem" for democratic discourse in modern pluralistic societies and propose an allegedly neutral solution in the form of philosophical distinctions between "the right" and "the good" or populist invocations of a "right to choose." Drawing on cultural theory and ethnographic research among activists in the Oregon debates over the legalization of physician-assisted suicide, I demonstrate that liberal "neutrality" harbors its own cultural bias, flattens the complexity of public debates, and undermines liberalism's own commitments to equality. I conclude that the praiseworthy liberal goal of impartiality in policy decisions would best be met not by the inaccessible norm of neutrality but by a norm of inclusivity, which intentionally solicits multiple cultural perspectives.

  4. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  5. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  6. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  7. Assessment of the Anthropometric Accommodation Requirements of Non-Pilot Aircrew in the CC-150 Polaris, CP-140 Aurora, CH-149 Cormorant and CC-130 Hercules Aircraft (Exigences Anthropometriques Pour le Personnel Navigant dans le CC-150 Polaris, CP-140 Aurora, CH-149 Cormorant et CC-130 Hercules)

    DTIC Science & Technology

    2008-10-01

    ...In February 2002, the Director General – Military Human Resources Policy and Planning abolished the Canadian Forces minimum height standard. It was concluded that "the..." Defence R&D Canada – Toronto; October 2008.

  8. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
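
    As a concrete illustration of the primal dual hybrid gradient iteration itself, kept deliberately simple with fixed rather than adaptive step sizes and applied to a small TV-denoising problem rather than the authors' waveform-inversion workflow, the following sketch runs the Chambolle-Pock updates for min_x 0.5‖x − b‖² + λ‖∇x‖₁; all parameter values are assumptions.

      import numpy as np

      def grad(x):
          # forward differences with Neumann boundary; returns a (2, m, n) array
          gx = np.zeros_like(x); gy = np.zeros_like(x)
          gx[:-1, :] = x[1:, :] - x[:-1, :]
          gy[:, :-1] = x[:, 1:] - x[:, :-1]
          return np.stack([gx, gy])

      def div(p):
          # discrete divergence, the negative adjoint of grad
          px, py = p
          dx = np.zeros_like(px); dy = np.zeros_like(py)
          dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
          dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
          return dx + dy

      def tv_denoise_pdhg(b, lam, n_iters=200):
          tau = sigma = 1.0 / np.sqrt(8.0)        # tau * sigma * ||grad||^2 <= 1
          x = b.copy(); x_bar = b.copy()
          y = np.zeros((2,) + b.shape)
          for _ in range(n_iters):
              y = np.clip(y + sigma * grad(x_bar), -lam, lam)       # dual prox: project onto the box
              x_new = (x + tau * div(y) + tau * b) / (1.0 + tau)    # primal prox of 0.5*||. - b||^2
              x_bar = 2.0 * x_new - x                               # extrapolation step
              x = x_new
          return x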

  10. On the numerical solution of the dynamically loaded hydrodynamic lubrication of the point contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.

    1990-01-01

    The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.

  11. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time

    PubMed Central

    Avellar, Gustavo S. C.; Pereira, Guilherme A. S.; Pimenta, Luciano C. A.; Iscold, Paulo

    2015-01-01

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem’s (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles’ maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs. PMID:26540055

  12. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time.

    PubMed

    Avellar, Gustavo S C; Pereira, Guilherme A S; Pimenta, Luciano C A; Iscold, Paulo

    2015-11-02

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem's (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles' maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs.

  13. Chemotaxis with logistic source

    NASA Astrophysics Data System (ADS)

    Winkler, Michael

    2008-12-01

    We consider the chemotaxis system in a smooth bounded domain Ω, where χ > 0 and g generalizes the logistic function g(u) = Au − bu^α with α > 1, A ≥ 0 and b > 0. A concept of very weak solutions is introduced, and global existence of such solutions for any nonnegative initial data u0 ∈ L^1(Ω) is proved under a suitable additional assumption. Moreover, boundedness properties of the constructed solutions are studied. Inter alia, it is shown that if b is sufficiently large and u0 ∈ L^∞(Ω) has small norm in L^γ(Ω) for some γ, then the solution is globally bounded. Finally, in the case that an additional condition holds, a bounded set in L^∞(Ω) can be found which eventually attracts very weak solutions emanating from arbitrary L^1 initial data. The paper closes with numerical experiments that illustrate some of the theoretically established results.

  14. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  15. Diplomatic Solutions to Additive Challenges

    DTIC Science & Technology

    Additive manufacturing (AM) technology, colloquially known as 3D printing, will bring significant benefits to society, but also poses great risks... regimes, are not sufficient to address the challenges presented by 3D printing technology. The DOS should evaluate and promote unconventional strategies... from printed weapons proliferation. Working with other nations to resolve the appropriate balance between development and security, and to promote norms...

  16. A Typology of Teacher-Rated Child Behavior: Revisiting Subgroups over 10 Years Later

    ERIC Educational Resources Information Center

    DiStefano, Christine A.; Kamphaus, Randy W.; Mindrila, Diana L.

    2010-01-01

    The purpose of this article was to examine a typology of child behavior using the Behavioral Assessment System for Children, Teacher Rating Scale (BASC TRS-C, 2nd edition; Reynolds & Kamphaus, 2004). The typology was compared with the solution identified from the 1992 BASC TRS-C norm dataset. Using cluster analysis, a seven-cluster solution…

  17. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
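
    The core Soft-Impute iteration is compact enough to sketch directly. The snippet below is a minimal illustration, with a dense SVD, no warm starts or low-rank tricks, and hypothetical variable names, assuming mask is True at the observed entries of X:

      import numpy as np

      def soft_impute(X, mask, lam, n_iters=100, tol=1e-4):
          """Iteratively replace missing entries with values from a soft-thresholded SVD."""
          Z = np.where(mask, X, 0.0)                          # initial fill of missing entries
          for _ in range(n_iters):
              U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
              Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt     # soft-threshold the singular values
              if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
                  return Z_new
              Z = Z_new
          return Z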

  18. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  19. A new weak Galerkin finite element method for elliptic interface problems

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu; ...

    2016-08-26

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Comparing with the existing WG algorithm for solving the same type problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.

  20. A new weak Galerkin finite element method for elliptic interface problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Comparing with the existing WG algorithm for solving the same type problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.

  1. New 2D dilaton gravity for nonsingular black holes

    NASA Astrophysics Data System (ADS)

    Kunstatter, Gabor; Maeda, Hideki; Taves, Tim

    2016-05-01

    We construct a two-dimensional action that is an extension of spherically symmetric Einstein-Lanczos-Lovelock (ELL) gravity. The action contains arbitrary functions of the areal radius and the norm squared of its gradient, but the field equations are second order and obey Birkhoff’s theorem. In complete analogy with spherically symmetric ELL gravity, the field equations admit the generalized Misner-Sharp mass as the first integral that determines the form of the vacuum solution. The arbitrary functions in the action allow for vacuum solutions that describe a larger class of interesting nonsingular black hole spacetimes than previously available.

  2. A Review of Methods for Moving Boundary Problems

    DTIC Science & Technology

    2009-07-01

    the boundary value problem for the eikonal equation: ‖∇u‖ = 1 for x ∈ Ω, u = 0 for x ∈ Γ, where ‖·‖ is the Euclidean norm... Solutions of the eikonal equation can in turn be characterized as steady-state solutions of the initial value problem u_t + sgn(u_0)(‖∇u‖ − 1) = 0... LS using the eikonal equation and use the NCI equation for the LS dynamics. The complete system of equations in weak form is ∫_Ω (‖∇u‖ − 1) w dV = 0

  3. Effect of geometry on hydrodynamic film thickness

    NASA Technical Reports Server (NTRS)

    Brewe, D. E.; Hamrock, B. J.; Taylor, C. M.

    1978-01-01

    The influence of geometry on the isothermal hydrodynamic film separating two rigid solids was investigated. Pressure-viscosity effects were not considered. The minimum film thickness is derived for fully flooded conjunctions by using the Reynolds conditions. It was found that the minimum film thickness had the same speed, viscosity, and load dependence as Kapitza's classical solution. However, the incorporation of Reynolds boundary conditions resulted in an additional geometry effect. Solutions using the parabolic film approximation are compared with those using the exact expression for the film in the analysis. Contour plots are shown that indicate in detail the pressure developed between the solids.

  4. Effect of geometry on hydrodynamic film thickness

    NASA Technical Reports Server (NTRS)

    Brewe, D. E.; Hamrock, B. J.; Taylor, C. M.

    1978-01-01

    The influence of geometry on the isothermal hydrodynamic film separating two rigid solids was investigated. Pressure-viscosity effects were not considered. The minimum film thickness is derived for fully flooded conjunctions by using the Reynolds boundary conditions. It was found that the minimum film thickness had the same speed, viscosity, and load dependence as Kapitza's classical solution. However, the incorporation of Reynolds boundary conditions resulted in an additional geometry effect. Solutions using the parabolic film approximation are compared with those using the exact expression for the film in the analysis. Contour plots are shown that indicate in detail the pressure developed between the solids.

  5. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or several machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  6. Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity.

    PubMed

    Santos, Fabiane Igansi de Castro Dos; Marini, Naciele; Santos, Railson Schreinert Dos; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio; de Oliveira, Antonio Costa

    2018-01-01

    Reverse Transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, to obtain accurate results, it depends on data normalization by using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes for iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stressful condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies for iron toxicity in both indica and japonica subspecies. The importance of studying genes other than the traditional endogenous genes for use as normalizers is also shown here.

  7. Selection and testing of reference genes for accurate RT-qPCR in rice seedlings under iron toxicity

    PubMed Central

    dos Santos, Fabiane Igansi de Castro; Marini, Naciele; dos Santos, Railson Schreinert; Hoffman, Bianca Silva Fernandes; Alves-Ferreira, Marcio

    2018-01-01

    Reverse Transcription quantitative PCR (RT-qPCR) is a technique for gene expression profiling with high sensitivity and reproducibility. However, to obtain accurate results, it depends on data normalization by using endogenous reference genes whose expression is constitutive or invariable. Although the technique is widely used in plant stress analyses, the stability of reference genes for iron toxicity in rice (Oryza sativa L.) has not been thoroughly investigated. Here, we tested a set of candidate reference genes for use in rice under this stressful condition. The test was performed using four distinct methods: NormFinder, BestKeeper, geNorm and the comparative ΔCt. To achieve reproducible and reliable results, Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed. Valid reference genes were found for shoot (P2, OsGAPDH and OsNABP), root (OsEF-1a, P8 and OsGAPDH) and root+shoot (OsNABP, OsGAPDH and P8), enabling us to perform further reliable studies for iron toxicity in both indica and japonica subspecies. The importance of studying genes other than the traditional endogenous genes for use as normalizers is also shown here. PMID:29494624

  8. Moving Forward with School Nutrition Policies: A Case Study of Policy Adherence in Nova Scotia.

    PubMed

    McIsaac, Jessie-Lee D; Shearer, Cindy L; Veugelers, Paul J; Kirk, Sara F L

    2015-12-01

    Many Canadian school jurisdictions have developed nutrition policies to promote health and improve the nutritional status of children, but research is needed to clarify adherence, guide practice-related decisions, and move policy action forward. The purpose of this research was to evaluate policy adherence with a review of online lunch menus of elementary schools in Nova Scotia (NS) while also providing transferable evidence for other jurisdictions. School menus in NS were scanned and a list of commonly offered items were categorized, according to minimum, moderate, or maximum nutrition categories in the NS policy. The results of the menu review showed variability in policy adherence that depended on food preparation practices by schools. Although further research is needed to clarify preparation practices, the previously reported challenges of healthy food preparations (e.g., cost, social norms) suggest that many schools in NS are likely not able to use these healthy preparations, signifying potential noncompliance to the policy. Leadership and partnerships are needed among researchers, policy makers, and nutrition practitioners to address the complexity of issues related to food marketing and social norms that influence school food environments to inspire a culture where healthy and nutritious food is available and accessible to children.

  9. Well-posedness and decay for the dissipative system modeling electro-hydrodynamics in negative Besov spaces

    NASA Astrophysics Data System (ADS)

    Zhao, Jihong; Liu, Qiao

    2017-07-01

    In Guo and Wang (2012) [10], Y. Guo and Y. Wang developed a general new energy method for proving the optimal time decay rates of the solutions to dissipative equations. In this paper, we generalize this method in the framework of homogeneous Besov spaces. Moreover, we apply this method to a model arising from electro-hydrodynamics, which is a strongly coupled system of the Navier-Stokes equations and the Poisson-Nernst-Planck equations through charge transport and external forcing terms. We show that some weighted negative Besov norms of solutions are preserved along time evolution, and obtain the optimal time decay rates of the higher-order spatial derivatives of solutions by the Fourier splitting approach and the interpolation techniques.

  10. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate a portfolio selection method with minimum transaction lots with conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses. This approach performs better when we work with non-symmetric return probability distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
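
    The risk measure at the heart of the model can be illustrated with a few lines of code. The sketch below uses synthetic data and is deliberately simplified: it only evaluates the scenario-based CVaR of a fixed lot allocation and does not perform the genetic-algorithm search described in the abstract.

      import numpy as np

      def portfolio_cvar(R, lots, lot_value, alpha=0.95):
          # R: (n_scenarios, n_assets) scenario returns; lots: integer lots held per asset
          weights = lots * lot_value
          weights = weights / weights.sum()            # capital fraction per asset
          losses = -R @ weights                        # portfolio loss in each scenario
          var = np.quantile(losses, alpha)             # Value-at-Risk at level alpha
          return losses[losses >= var].mean()          # CVaR: average loss in the tail

      rng = np.random.default_rng(0)
      R = rng.normal(0.001, 0.02, size=(5000, 4))      # hypothetical return scenarios
      print(portfolio_cvar(R, lots=np.array([3, 1, 2, 5]),
                           lot_value=np.array([100.0, 250.0, 80.0, 60.0])))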

  11. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong

    2016-12-01

    Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation of the trace norm of a low-rank tensor and its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI)log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with O(rI^(N-1)) observations required by those tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than them, and is orders of magnitude faster.

  12. Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization

    PubMed Central

    Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan

    2017-01-01

    In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) source in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and the real-valued data are obtained by utilizing unitary transformation. Then a real-valued block sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve the enhanced sparsity of solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because of using the noncircular properties of signals to extend the virtual array aperture and an additional real structure to suppress the noise, the proposed method provides better performance compared with the conventional sparse recovery based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770

  13. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem, and has lots of applications in social networks. It is NP-hard, and receives more and more attention. Various methods have been proposed to solve the positive influence dominating set problem. However, most of the existing work focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear programming (ILP), and propose an ILP based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with nodes up to 36,692. The results show that ILPMA significantly improves the solution quality, and is robust.

  14. Minimum Weight Design of a Leaf Spring Tapered in Thickness and Width for the Hubble Space Telescope-Space Support Equipment

    NASA Technical Reports Server (NTRS)

    Rodriguez, P. I.

    1990-01-01

    A linear elastic solution to the problem of minimum weight design of cantilever beams with variable width and depth is presented. The solution shown is for the specific application of the Hubble Space Telescope maintenance mission hardware. During these maintenance missions, delicate instruments must be isolated from the potentially damaging vibration environment of the space shuttle cargo bay during the ascent and descent phases. The leaf springs are designed to maintain the isolation system natural frequency at a level where load transmission to the instruments is a minimum. Nonlinear programming is used for the optimization process. The weight of the beams is the objective function, with the deflection and allowable bending stress as the constraint equations. The design variables are the width and depth of the beams at both the free and the fixed ends.
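
    The nonlinear-programming formulation can be illustrated on a much simpler member. The sketch below uses a prismatic cantilever with an end load and assumed material and load values, not the actual tapered HST leaf spring, and minimizes weight subject to bending-stress and tip-deflection inequality constraints:

      import numpy as np
      from scipy.optimize import minimize

      # Toy sketch: minimize the weight of a prismatic cantilever of length L carrying
      # an end load F, with design variables width b and depth h.
      L, F, E, rho = 0.5, 500.0, 70e9, 2700.0          # m, N, Pa, kg/m^3 (assumed values)
      sigma_allow, delta_allow = 200e6, 5e-3           # allowable stress (Pa) and deflection (m)

      def weight(x):
          b, h = x
          return rho * b * h * L

      def constraints(x):
          b, h = x
          sigma = 6.0 * F * L / (b * h ** 2)            # max bending stress at the root
          delta = 4.0 * F * L ** 3 / (E * b * h ** 3)   # tip deflection, I = b*h^3/12
          return np.array([sigma_allow - sigma, delta_allow - delta])   # must be >= 0

      res = minimize(weight, x0=np.array([0.05, 0.02]), method="SLSQP",
                     bounds=[(1e-3, 0.2), (1e-3, 0.1)],
                     constraints={"type": "ineq", "fun": constraints})
      b_opt, h_opt = res.x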

  15. Rotator Cuff Strength Ratio and Injury in Glovebox Workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Amelia M.

    Rotator cuff integrity is critical to shoulder health. Due to the high workload imposed upon the shoulder while working in an industrial glovebox, this study investigated the strength ratio of the rotator cuff muscles in glovebox workers and compared this ratio to the healthy norm. Descriptive statistics were collected using a short questionnaire. Handheld dynamometry was used to quantify the ratio of forces produced in the motions of shoulder internal and external rotation. Results showed this population to have shoulder strength ratios that were significantly different from the healthy norm. The deviation from the normal ratio demonstrates the need for solutions designed to reduce the workload on the rotator cuff musculature of glovebox workers in order to improve health and safety. Assessment of strength ratios can be used to screen for risk of symptom development.

  16. Direct prediction of the solute softening-to-hardening transition in W–Re alloys using stochastic simulations of screw dislocation motion

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Marian, Jaime

    2018-06-01

    Interactions among dislocations and solute atoms are the basis of several important processes in metal plasticity. In body-centered cubic (bcc) metals and alloys, low-temperature plastic flow is controlled by screw dislocation glide, which is known to take place by the nucleation and sideward relaxation of kink pairs across two consecutive Peierls valleys. In alloys, dislocations and solutes affect each other’s kinetics via long-range stress field coupling and short-range inelastic interactions. It is known that in certain substitutional bcc alloys a transition from solute softening to solute hardening is observed at a critical concentration. In this paper, we develop a kinetic Monte Carlo model of screw dislocation glide and solute diffusion in substitutional W–Re alloys. We find that dislocation kinetics is governed by two competing mechanisms. At low solute concentrations, nucleation is enhanced by the softening of the Peierls stress, which dominates over the elastic repulsion of Re atoms on kinks. This trend is reversed at higher concentrations, resulting in a minimum in the flow stress that is concentration and temperature dependent. This minimum marks the transition from solute softening to hardening, which is found to be in reasonable agreement with experiments.

  17. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
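
    A minimal scalar example of the idea in the first part of the thesis, parameterized non-convex regularization with the non-convexity restricted so the overall objective stays convex, is the minimax-concave (MC) penalty: for 0.5(y − x)² + λφ(x; a) the objective is strictly convex whenever 0 ≤ a < 1/λ, and the minimizer is the firm-threshold function sketched below (an illustration of the construction, not code from the thesis):

      import numpy as np

      def firm_threshold(y, lam, a):
          """Minimizer of 0.5*(y - x)**2 + lam*phi_MC(x; a), where
          phi_MC(x; a) = |x| - 0.5*a*x**2 for |x| <= 1/a and 1/(2a) otherwise.
          Reduces to soft-thresholding when a = 0."""
          assert 0.0 <= a < 1.0 / lam, "strict convexity requires 0 <= a < 1/lam"
          y = np.asarray(y, dtype=float)
          upper = np.inf if a == 0.0 else 1.0 / a
          small, large = np.abs(y) <= lam, np.abs(y) >= upper
          x = np.where(large, y, 0.0)                      # large values pass through unchanged
          mid = ~small & ~large
          x = np.where(mid, np.sign(y) * (np.abs(y) - lam) / (1.0 - a * lam), x)
          return x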

  18. A Dictionary Learning Approach with Overlap for the Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

    PubMed Central

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis functions coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987
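
    The solver named in the abstract, proximal gradient descent with FISTA acceleration, has a short generic form. The sketch below applies it to a plain L1-regularized least-squares subproblem with a dense matrix A; this is an illustration only, since the PyHST implementation works with overlapping patches and tomographic projection/backprojection operators instead.

      import numpy as np

      def fista_l1(A, b, lam, n_iters=200):
          """Solve min_w 0.5*||A w - b||^2 + lam*||w||_1 by accelerated proximal gradient."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
          w = np.zeros(A.shape[1]); y = w.copy(); t = 1.0
          for _ in range(n_iters):
              g = A.T @ (A @ y - b)                # gradient of the fidelity term at y
              z = y - g / L
              w_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (prox of L1)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              y = w_new + ((t - 1.0) / t_new) * (w_new - w)               # FISTA momentum step
              w, t = w_new, t_new
          return w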

  19. A dictionary learning approach with overlap for the low dose computed tomography reconstruction and its vectorial application to differential phase tomography.

    PubMed

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis functions coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography.

  20. Dwell time algorithm based on the optimization theory for magnetorheological finishing

    NASA Astrophysics Data System (ADS)

    Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen

    2010-10-01

    Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time the tool dwells at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on the matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time is transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function. The dwell time to be calculated is just the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory is established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because this optimization model takes certain polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment has been performed on the MRF machine developed in-house. After 4.7 minutes' polishing, the figure error of a flat workpiece with a 50 mm diameter is improved from 0.191λ (λ = 632.8 nm) to 0.087λ PV and from 0.041λ to 0.010λ RMS. This algorithm can be applied to workpieces of all shapes including flats, spheres, aspheres, and prisms, and it is capable of improving the polishing figures dramatically.
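
    A minimal version of the least-squares variant of such a dwell-time computation can be written as a nonnegative least-squares problem, min ‖A t − e‖₂ with t ≥ 0, where e is the sampled initial figure error, A holds the removal function, and t is the dwell time at each tool position. The sketch below uses synthetic 1-D data and an assumed Gaussian removal function; it only illustrates this formulation, not the authors' CNC-constrained algorithm.

      import numpy as np
      from scipy.optimize import nnls

      # A[i, j]: material removed at surface point i per unit dwell at tool position j
      x = np.linspace(-1.0, 1.0, 80)
      A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)   # Gaussian removal function
      e = 0.5 + 0.3 * np.cos(3.0 * x)                        # synthetic initial figure error

      t, residual_norm = nnls(A, e)                          # dwell times constrained to t >= 0
      print(residual_norm)                                   # 2-norm of the residual figure error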

  1. Policy, COIN Doctrine, and Political Legitimacy

    DTIC Science & Technology

    2012-12-01

    In individualistic cultures there is an "I" consciousness; identity is an individual matter... Societies with individualistic value systems prefer liberal democratic governance built on the idea that the government... how considering cultural norms and values can lead to seeing other forms of legitimacy as viable solutions. We first try to determine which value...

  2. Quasi-static responses and variational principles in gradient plasticity

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc-Son

    2016-12-01

    Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and satisfies also a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.

  3. Optimal rendezvous in the neighborhood of a circular orbit

    NASA Technical Reports Server (NTRS)

    Jones, J. B.

    1975-01-01

    The minimum velocity change rendezvous solutions, when the motion may be linearized about a circular orbit, fall into two separate regions; the phase-for-free region and the general region. Phase-for-free solutions are derived from the optimum transfer solutions, require the same velocity change expenditure, but may not be unique. Analytic solutions are presented in two of the three subregions. An algorithm is presented for determining the unique solutions in the general region. Various sources of initial conditions are discussed and three examples presented.

  4. Optimal impulsive manoeuvres and aerodynamic braking

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1985-01-01

    A method developed for obtaining solutions to the aerodynamic braking problem, using impulses in the exoatmospheric phases is discussed. The solution combines primer vector theory and the results of a suboptimal atmospheric guidance program. For a specified initial and final orbit, the solution determines: (1) the minimum impulsive cost using a maximum of four impulses, (2) the optimal atmospheric entry and exit-state vectors subject to equality and inequality constraints, and (3) the optimal coast times. Numerical solutions which illustrate the characteristics of the solution are presented.

  5. A heuristic for suffix solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilgory, A.; Gajski, D.D.

    1986-01-01

    The suffix problem has appeared in solutions of recurrence systems for parallel and pipelined machines and more recently in the design of gate and silicon compilers. In this paper the authors present two algorithms. The first algorithm generates parallel suffix solutions with minimum cost for a given length, time delay, availability of initial values, and fanout. This algorithm generates a minimal solution for any length n and any depth in the range log₂ n to n. The second algorithm reduces the size of the solutions generated by the first algorithm.
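
    The structure of a parallel suffix computation can be shown with a recursive-doubling sketch for an associative operator (addition here). Each round's updates are mutually independent, which is what a parallel or pipelined machine exploits; this is only meant to illustrate the shape of such solutions, not the paper's cost-minimal construction.

      import numpy as np

      def parallel_suffix_sum(x):
          """After ceil(log2 n) rounds, x[i] holds x[i] + x[i+1] + ... + x[n-1]."""
          x = np.array(x, dtype=float)
          n = len(x)
          d = 1
          while d < n:
              x[:n - d] = x[:n - d] + x[d:]    # all updates in a round can run in parallel
              d *= 2
          return x

      print(parallel_suffix_sum([1, 2, 3, 4, 5]))   # -> [15. 14. 12.  9.  5.]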

  6. Observations of non-linear plasmon damping in dense plasmas

    NASA Astrophysics Data System (ADS)

    Witte, B. B. L.; Sperling, P.; French, M.; Recoules, V.; Glenzer, S. H.; Redmer, R.

    2018-05-01

    We present simulations using finite-temperature density-functional-theory molecular-dynamics to calculate dynamic dielectric properties in warm dense aluminum. The comparison between exchange-correlation functionals in the Perdew, Burke, Ernzerhof approximation, Strongly Constrained and Appropriately Normed Semilocal Density Functional, and Heyd, Scuseria, Ernzerhof (HSE) approximation indicates evident differences in the electron transition energies, dc conductivity, and Lorenz number. The HSE calculations show excellent agreement with x-ray scattering data [Witte et al., Phys. Rev. Lett. 118, 225001 (2017)] as well as dc conductivity and absorption measurements. These findings demonstrate non-Drude behavior of the dynamic conductivity above the Cooper minimum that needs to be taken into account to determine optical properties in the warm dense matter regime.

  7. Boiling characteristics of dilute polymer solutions and implications for the suppression of vapor explosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bang, K.H.; Kim, M.H.

    Quenching experiments of hot solid spheres in dilute aqueous solutions of polyethylene oxide polymer have been conducted for the purpose of investigating the physical mechanisms of the suppression of vapor explosions in these polymer solutions. Two spheres of 22.2 mm and 9.5 mm diameter were tested in the polymer solutions of various concentrations at 30 °C. The minimum film boiling temperature (ΔT_MFB) in this highly subcooled liquid rapidly decreased from over 700 °C for pure water to about 150 °C as the polymer concentration was increased up to 300 ppm for the 22.2 mm sphere, and it decreased to 350 °C for the 9.5 mm sphere. This rapid reduction of the minimum film boiling temperature in the PEO aqueous solutions can explain their ability to suppress spontaneous vapor explosions. The ability of dilute polyethylene oxide solutions to suppress vapor explosions against an external trigger pressure was tested by dropping molten tin into the polymer solutions at 25 °C. It was observed that in 50 ppm solutions more mass fragmented than in pure water, but weaker explosion pressures were produced. The explosion was completely suppressed in 300 ppm solutions with the external trigger. The debris size distributions of fine fragments smaller than 0.7 mm were almost identical regardless of the polymer concentration.

  8. The primary prevention of alcohol problems: a critical review of the research literature.

    PubMed

    Moskowitz, J M

    1989-01-01

    The research evaluating the effects of programs and policies in reducing the incidence of alcohol problems is critically reviewed. Four types of preventive interventions are examined including: (1) policies affecting the physical, economic and social availability of alcohol (e.g., minimum legal drinking age, price and advertising of alcohol), (2) formal social controls on alcohol-related behavior (e.g., drinking-driving laws), (3) primary prevention programs (e.g., school-based alcohol education), and (4) environmental safety measures (e.g., automobile airbags). The research generally supports the efficacy of three alcohol-specific policies: raising the minimum legal drinking age to 21, increasing alcohol taxes and increasing the enforcement of drinking-driving laws. Also, research suggests that various environmental safety measures reduce the incidence of alcohol-related trauma. In contrast, little evidence currently exists to support the efficacy of primary prevention programs. However, a systems perspective of prevention suggests that prevention programs may become more efficacious after widespread adoption of prevention policies that lead to shifts in social norms regarding use of beverage alcohol.

  9. The emergence of a global right to health norm – the unresolved case of universal access to quality emergency obstetric care

    PubMed Central

    2014-01-01

    Background The global response to HIV suggests the potential of an emergent global right to health norm, embracing shared global responsibility for health, to assist policy communities in framing the obligations of the domestic state and the international community. Our research explores the extent to which this global right to health norm has influenced the global policy process around maternal health rights, with a focus on universal access to emergency obstetric care. Methods In examining the extent to which arguments stemming from a global right to health norm have been successful in advancing international policy on universal access to emergency obstetric care, we looked at the period from 1985 to 2013. We adopted a qualitative case study approach applying a process-tracing methodology using multiple data sources, including an extensive literature review and limited key informant interviews, to analyse the international policy agenda setting process surrounding maternal health rights, focusing on emergency obstetric care. We applied John Kingdon's public policy agenda setting streams model to analyse our data. Results Kingdon's model suggests that to succeed as a mobilising norm, the right to health could work if it can help bring the problem, policy and political streams together, as it did with access to AIDS treatment. Our analysis suggests that despite a normative grounding in the right to health, prioritisation of the specific maternal health entitlements remains fragmented. Conclusions Despite United Nations recognition of maternal mortality as a human rights issue, the relevant policy communities have not yet managed to shift the policy agenda to prioritise the global right to health norm of shared responsibility for realising access to emergency obstetric care. The experience of HIV advocates in pushing for global solutions based on right to health principles, including participation, solidarity and accountability, suggests potential avenues for utilising right to health based arguments to push for policy priority for universal access to emergency obstetric care in the post-2015 global agenda. PMID:24576008

  10. Fast Diffusion to Self-Similarity: Complete Spectrum, Long-Time Asymptotics, and Numerology

    NASA Astrophysics Data System (ADS)

    Denzler, Jochen; McCann, Robert J.

    2005-03-01

    The complete spectrum is determined for the operator on the Sobolev space W^{1,2}_ρ(R^n) formed by closing the smooth functions of compact support with respect to the norm. Here the Barenblatt profile ρ is the stationary attractor of the rescaled diffusion equation in the fast, supercritical regime m; the same diffusion dynamics represent the steepest descent down an entropy E(u) on probability measures with respect to the Wasserstein distance d_2. Formally, the operator H = Hess_ρ E is the Hessian of this entropy at its minimum ρ, so the spectral gap H ≥ α := 2 - n(1-m) found below suggests the sharp rate of asymptotic convergence from any centered initial data 0 ≤ u(0,x) ∈ L^1(R^n) with second moments. This bound improves various results in the literature, and suggests the conjecture that the self-similar solution u(t,x) = R(t)^{-n} ρ(x/R(t)) is always slowest to converge. The higher eigenfunctions, which are polynomials with hypergeometric radial parts, and the presence of continuous spectrum yield additional insight into the relations between symmetries of R^n and the flow. Thus the rate of convergence can be improved if we are willing to replace the distance to ρ with the distance to its nearest mass-preserving dilation (or still better, affine image). The strange numerology of the spectrum is explained in terms of the number of moments of ρ.

  11. The inverse problem in electroencephalography using the bidomain model of electrical activity.

    PubMed

    Lopez Rincon, Alejandro; Shimoda, Shingo

    2016-12-01

    Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimates (MNEs) and low resolution electrical tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of the Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
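
    For reference, here is a minimal sketch of the classical Tikhonov-regularized minimum norm estimate (the MNE baseline the bidomain reconstruction is compared against); the lead-field matrix, data and regularization parameter below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # A minimal sketch of the Tikhonov-regularized minimum norm estimate (MNE baseline);
    # the lead field L, data b and parameter lam are illustrative placeholders.
    rng = np.random.default_rng(0)
    n_sensors, n_sources = 32, 500
    L = rng.standard_normal((n_sensors, n_sources))          # lead-field (gain) matrix
    j_true = np.zeros(n_sources)
    j_true[[40, 240]] = 1.0                                   # two active sources
    b = L @ j_true + 0.01 * rng.standard_normal(n_sensors)   # simulated scalp data

    lam = 1e-2                                                # regularization parameter
    # j_hat = L^T (L L^T + lam I)^{-1} b : the minimum L2-norm current fitting the data
    j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
    ```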

  12. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
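
    As a point of comparison, the sketch below shows the plain penalized-regression Elastic Net that the abstract contrasts with its hierarchical Bayesian formulation; the simulated lead field, data and penalty settings are illustrative, and the ELASSO model and Empirical Bayes hyperparameter learning are not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    # A minimal sketch of the plain penalized-regression Elastic Net (the non-Bayesian
    # baseline mentioned in the abstract); K, v and the penalty settings are illustrative.
    rng = np.random.default_rng(1)
    K = rng.standard_normal((64, 300))             # 64 sensors, 300 candidate sources
    j = np.zeros(300)
    j[100:110] = 1.0                               # one smooth nonzero patch
    v = K @ j + 0.05 * rng.standard_normal(64)     # sensor data

    # alpha scales the penalty; l1_ratio mixes the L1 (sparsity) and L2 (smoothness) terms
    model = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False, max_iter=5000).fit(K, v)
    j_hat = model.coef_
    ```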

  13. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    PubMed

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.

  14. The transformation of weak saturated soils using piles-drains for improving its mechanical properties

    NASA Astrophysics Data System (ADS)

    Ter-Martirosyan, Z. G.; Ter-Martirosyan, A. Z.; Sidorov, V. V.

    2018-04-01

    In the design of structures of increased responsibility, weak saturated clayey soils with low deformability and strength characteristics are often encountered on the construction site. In these cases, design codes recommend foundations using piles-drains of sandy or coarse material, which are able to bear the load and to accelerate the consolidation process. The presented solutions include an analytical solution of the problem of interaction between the piles, the slab raft foundation and the surrounding soil of the base, allowing for the possibility of extension of the pile shaft. Closed-form solutions for the stresses in the pile shaft and in the soil under the foundation slab are obtained. The article presents the results of large-scale tests in the pilot construction area of major energy facilities in Russia.

  15. Teaching with Spreadsheets: An Example from Heat Transfer.

    ERIC Educational Resources Information Center

    Drago, Peter

    1993-01-01

    Provides an activity which measures the heat transfer through an insulated cylindrical tank, allowing the student to gain a better knowledge of both the physics involved and the working of spreadsheets. Provides both a spreadsheet solution and a maximum-minimum method of solution for the problem. (MVL)
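
    As a minimal illustration of the kind of calculation such a spreadsheet exercise builds on, the sketch below evaluates steady radial conduction through cylindrical insulation; the dimensions, conductivity and temperature difference are illustrative assumptions, not values from the article.

    ```python
    import numpy as np

    # A minimal sketch of steady radial conduction through cylindrical insulation, the kind
    # of formula such a spreadsheet exercise is built around; all values are illustrative.
    k = 0.04               # insulation thermal conductivity, W/(m*K)
    L = 2.0                # cylinder length, m
    r1, r2 = 0.50, 0.60    # inner and outer radii of the insulation layer, m
    dT = 60.0              # temperature difference across the insulation, K

    Q = 2.0 * np.pi * k * L * dT / np.log(r2 / r1)   # radial conduction heat rate, W
    print("heat loss ~ %.1f W" % Q)
    ```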

  16. Matrix Recipes for Hard Thresholding Methods

    DTIC Science & Technology

    2012-11-07

    have been proposed to approximate the solution. In [11], Donoho et al. demonstrate that, in the sparse approximation problem, under basic incoherence... inducing convex surrogate ‖·‖1 with provable guarantees for unique signal recovery. In the ARM problem, Fazel et al. [12] identified the nuclear norm...

  17. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    In this study, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  18. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongying; Chen, Zheng; Li, Jin

    In this study, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  19. Scheduling policies of intelligent sensors and sensor/actuators in flexible structures

    NASA Astrophysics Data System (ADS)

    Demetriou, Michael A.; Potami, Raffaele

    2006-03-01

    In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and the spatial distribution of disturbances is found that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider the collocated actuator/sensor pairs and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuators in sleep mode.

  20. Men's violence against women and men are inter-related: Recommendations for simultaneous intervention

    PubMed Central

    Fleming, Paul J.; Gruskin, Sofia; Rojo, Florencia; Dworkin, Shari L.

    2015-01-01

    Men are more likely than women to perpetrate nearly all types of interpersonal violence (e.g. intimate partner violence, murder, assault, rape). While public health programs target prevention efforts for each type of violence, there are rarely efforts that approach the prevention of violence holistically and attempt to tackle its common root causes. Drawing upon theories that explain the drivers of violence, we examine how gender norms, including norms and social constructions of masculinity, are at the root of most physical violence perpetration by men against women and against other men. We then argue that simply isolating each type of violence and constructing separate interventions for each type is inefficient and less effective. We call for recognition of the commonalities found across the drivers of different types of violence and make intervention recommendations with the goal of seeking more long-standing solutions to violence prevention. PMID:26482359

  1. H2-norm for mesh optimization with application to electro-thermal modeling of an electric wire in automotive context

    NASA Astrophysics Data System (ADS)

    Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia

    2017-04-01

    In the automotive field, reducing the dimensions of electric conductors is important for decreasing embedded mass and manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints, together with protection algorithms that maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, solutions of the thermal equation lead to implicit fractional transfer functions involving an exponential, which cannot be embedded in an automotive on-board computer. This paper thus proposes an integer-order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with measured data on a 1.5 mm2 wire implemented in a dedicated test bench.
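
    The sketch below shows one way an H2-norm approximation error between a reference model and a reduced integer-order model can be evaluated from state-space realizations via the controllability Gramian; the matrices are illustrative placeholders and the fractional-order wire model of the paper is not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # A minimal sketch of evaluating an H2-norm approximation error between two state-space
    # models; the matrices are illustrative and the fractional wire model is not reproduced.
    def h2_norm(A, B, C):
        # ||G||_H2 = sqrt(trace(C P C^T)), with A P + P A^T + B B^T = 0 (controllability Gramian)
        P = solve_continuous_lyapunov(A, -B @ B.T)
        return float(np.sqrt(np.trace(C @ P @ C.T)))

    A1, B1, C1 = np.array([[-1.0, 0.0], [0.0, -3.0]]), np.array([[1.0], [1.0]]), np.array([[1.0, 0.5]])
    A2, B2, C2 = np.array([[-1.2]]), np.array([[1.0]]), np.array([[1.1]])

    # error system G1 - G2 as a block-diagonal realization with differenced output
    Ae = np.block([[A1, np.zeros((2, 1))], [np.zeros((1, 2)), A2]])
    Be = np.vstack([B1, B2])
    Ce = np.hstack([C1, -C2])
    print("H2 approximation error:", h2_norm(Ae, Be, Ce))
    ```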

  2. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
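
    Schematically, the least-squares step amounts to minimizing the L2 norm of a linear(ized) residual A u - b at each time level, which leads to the normal equations; the sketch below uses an arbitrary well-conditioned operator purely to illustrate the idea, not the actual Euler-equation discretization.

    ```python
    import numpy as np

    # A minimal sketch of the least-squares step: minimize ||A u - b||_2 for a linear(ized)
    # one-step residual, i.e. solve the normal equations A^T A u = A^T b. The operator A and
    # data b are illustrative placeholders, not the actual Euler-equation discretization.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 25))       # discrete first-order system operator for one step
    b = rng.standard_normal(40)             # data carried over from the previous time level

    u = np.linalg.solve(A.T @ A, A.T @ b)   # minimizer of the L2 norm of the residual
    print("residual L2 norm:", np.linalg.norm(A @ u - b))
    ```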

  3. The ecology and evolution of temperature-dependent reaction norms for sex determination in reptiles: a mechanistic conceptual model.

    PubMed

    Pezaro, Nadav; Doody, J Sean; Thompson, Michael B

    2017-08-01

    Sex-determining mechanisms are broadly categorised as being based on either genetic or environmental factors. Vertebrate sex determination exhibits remarkable diversity but displays distinct phylogenetic patterns. While all eutherian mammals possess XY male heterogamety and female heterogamety (ZW) is ubiquitous in birds, poikilothermic vertebrates (fish, amphibians and reptiles) exhibit multiple genetic sex-determination (GSD) systems as well as environmental sex determination (ESD). Temperature is the factor controlling ESD in reptiles and temperature-dependent sex determination (TSD) in reptiles has become a focal point in the study of this phenomenon. Current patterns of climate change may cause detrimental skews in the population sex ratios of reptiles exhibiting TSD. Understanding the patterns of variation, both within and among populations and linking such patterns with the selection processes they are associated with, is the central challenge of research aimed at predicting the capacity of populations to adapt to novel conditions. Here we present a conceptual model that innovates by defining an individual reaction norm for sex determination as a range of incubation temperatures. By deconstructing individual reaction norms for TSD and revealing their underlying interacting elements, we offer a conceptual solution that explains how variation among individual reaction norms can be inferred from the pattern of population reaction norms. The model also links environmental variation with the different patterns of TSD and describes the processes from which they may arise. Specific climate scenarios are singled out as eco-evolutionary traps that may lead to demographic extinction or a transition to either male or female heterogametic GSD. We describe how the conceptual principles can be applied to interpret TSD data and to explain the adaptive capacity of TSD to climate change as well as its limits and the potential applications for conservation and management programs. © 2016 Cambridge Philosophical Society.

  4. Stateline: Critical Mass

    ERIC Educational Resources Information Center

    Christie, Kathy

    2005-01-01

    In Physics "critical mass" refers to the minimum amount of fissionable material required to sustain a chain reaction. The adoption of state education policy isn't often equated with this concept, but occasionally solutions and ideas seem to gather around a common problem. If the solution at hand is simple, easily understood, and…

  5. Geodesy in Antarctica: A pilot study based on the TAMDEF GPS network, Victoria Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Vazquez Becerra, Guadalupe Esteban

    The objective of the research presented in this dissertation is a combination of practical and theoretical problems to investigate unique aspects of GPS (Global Positioning System) geodesy in Antarctica. This is derived from a complete analysis of a GPS network called TAMDEF (Trans Antarctic Mountains Deformation), located in Victoria Land, Antarctica. In order to permit access to the International Terrestrial Reference Frame (ITRF), the McMurdo (MCM4) IGS (The International GNSS Service for Geodynamics, formerly the International GPS Service) site was adopted as part of the TAMDEF network. The following scientific achievements obtained from the cited analysis will be discussed as follows: (1) The GPS data processing for the TAMDEF network relied on the PAGES (Program for Adjustment of GPS Ephemerides) software that uses the double-differenced iono-free linear combination, which helps remove a large part of the bias (at the mm level) in the final positioning. (2) To validate the use of different antenna types in TAMDEF, an antenna testing experiment was conducted using the National Geodetic Survey (NGS) antenna calibration data, appropriate for each antenna type. Sub-daily and daily results from the antenna testing are at the sub-millimeter level, based on the fact that 24-hour solutions were used to average any possible bias. (3) A potential contributor that might have an impact on the TAMDEF stations positioning is the pseudorange multipath effect; thus, the root mean squared variations were estimated and analyzed in order to identify the most and least affected sites. MCM4 was found to be the site with the highest multipath, which is problematic since MCM4 is the primary ITRF access point for this part of Antarctica. Additionally, results from the pseudorange multipath can be used for further data cleaning to improve positioning results. (4) The Ocean Tide Modeling relied on the use of two models: CATS02.01 (Circum Antarctic Tidal Simulation) and TPXO6.2 (TOPEX/Poseidon) to investigate which model suits the Antarctic conditions best and its effect on the vertical coordinate component at the TAMDEF sites. (5) The scatter of the time-series results of the coordinate components for the TAMDEF sites is smaller when processed with respect to the Antarctic tectonic plate (Case I), in comparison with the other tectonic plates outside Antarctica (Cases II-IV). Also, the seasonal effects seen in the time series of the TAMDEF sites with longer data spans are site dependent; thus, data processing is not the reason for these effects. (6) Furthermore, the results coming from a homogeneous global network with coordinates referred and transformed to the ITRF2000 at epoch 2005.5 reflect the quality of the solution obtained when processing TAMDEF network data with respect to the Antarctic tectonic plate. (7) An optimal data reduction strategy was developed, based on three different troposphere models and mapping functions, tested and used to estimate the total wet zenith delay (TWZD), which later was transformed to precipitable water vapor (PWV). PWV was estimated from GPS measurements and validated with a numerical weather model, AMPS (Antarctic Mesoscale Prediction System), and radiosonde PWV. Additionally, to validate the TWZD estimates at the MCM4 site before their conversion into the GPS PWV, these estimates were directly compared to TWZD computed by the CDDIS (Crustal Dynamics Data Information System) analysis center. (8) The results from the Least-Squares adjustment with Stochastic Constraints (SCLESS) as performed with PAGES are very comparable (mm-level) to those obtained from the alternative adjustment approaches: MINOLESS (Minimum-Norm Least-Squares adjustment), Partial-MINOLESS (Partial Minimum-Norm Least-Squares adjustment), and BLIMPBE (Best Linear Minimum Partial-Bias Estimation). Based on the applied network adjustment models within the Antarctic tectonic plate (Case I), it can be demonstrated that the GPS data used are clean of bias after proper care has been taken of ionosphere, troposphere, multipath, and other sources that affect GPS positioning. Overall, it can be concluded that no suspected bias was present in the obtained results; thus, GPS is indeed capable of capturing the signal, which can be used for further geophysical interpretation within Antarctica.
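
    As a minimal illustration of a minimum-norm least-squares (MINOLESS-style) solution for a rank-deficient adjustment, the sketch below shows that the pseudoinverse selects, among all least-squares solutions, the one of smallest parameter norm; the design matrix and observations are illustrative placeholders, and the Partial-MINOLESS and BLIMPBE variants are not reproduced.

    ```python
    import numpy as np

    # A minimal sketch of a minimum-norm least-squares (MINOLESS-style) solution of a
    # rank-deficient adjustment A x ~ l; A and l are illustrative placeholders.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 6))   # rank-deficient design matrix
    l = rng.standard_normal(8)                                      # observations

    x_min_norm = np.linalg.pinv(A) @ l                 # minimum-norm least-squares solution
    x_lstsq, *_ = np.linalg.lstsq(A, l, rcond=None)    # lstsq returns the same minimum-norm solution
    print(np.allclose(x_min_norm, x_lstsq))
    ```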

  6. Numerical solution of open string field theory in Schnabl gauge

    NASA Astrophysics Data System (ADS)

    Arroyo, E. Aldo; Fernandes-Silva, A.; Szitas, R.

    2018-01-01

    Using traditional Virasoro L0 level-truncation computations, we evaluate the open bosonic string field theory action up to level (10, 30). Extremizing this level-truncated potential, we construct a numerical solution for tachyon condensation in Schnabl gauge. We find that the energy associated with the numerical solution overshoots the expected value -1 at level L = 6. Extrapolating the level-truncation data for L ≤ 10 to estimate the vacuum energies for L > 10, we predict that the energy reaches a minimum value at L ˜ 12, and then turns back to approach -1 asymptotically as L → ∞. Furthermore, we analyze the tachyon vacuum expectation value (vev), for which, by extrapolating its corresponding level-truncation data, we predict that the tachyon vev reaches a minimum value at L ˜ 26, and then turns back to approach the expected analytical result as L → ∞.

  7. SF-36v2 norms and its' discriminative properties among healthy households of tuberculosis patients in Malaysia.

    PubMed

    Atif, Muhammad; Sulaiman, Syed Azhar Syed; Shafie, Asrul Akmal; Asif, Muhammad; Ahmad, Nafees

    2013-10-01

    The aim of the study was to obtain norms of the SF-36v2 health survey and the association of summary component scores with socio-demographic variables in healthy households of tuberculosis (TB) patients. All household members (18 years and above; healthy; literate) of registered tuberculosis patients who came for contact tracing during March 2010 to February 2011 at the respiratory clinic of Penang General Hospital were invited to complete the SF-36v2 health survey using the official translation of the questionnaire in Malay, Mandarin, Tamil and English. Scoring of the questionnaire was done using Quality Metric's QM Certified Scoring Software version 4. Multivariate analysis was conducted to uncover the predictors of physical and mental health. A total of 649 eligible respondents were approached, while 525 agreed to participate in the study (response rate = 80.1 %). Of the consenting respondents, 46.5 % were male and only 5.3 % were over 75 years. Internal consistencies met the minimum criteria (α > 0.7). Correlations between the scales were always less than their respective reliability coefficients. Mean physical component summary scale scores were equivalent to United States general population norms. However, there was a difference of more than three norm-based scoring points for mean mental component summary scores, indicating poor mental health. A notable proportion of the respondents was at risk of depression. Being aged 75 years and above (p = 0.001; OR 32.847), widowed (p = 0.013; OR 2.599) and postgraduate education (p < 0.001; OR 7.865) were predictors of poor physical health, while unemployment (p = 0.033; OR 1.721) was the only predictor of poor mental health. The SF-36v2 is a valid instrument to assess HRQoL among the households of TB patients. Study findings indicate the existence of poor mental health and risk of depression among family caregivers of TB patients. We therefore recommend that caregivers of TB patients be offered intensive support and special attention to cope with these emotional problems.

  8. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers

    PubMed Central

    2014-01-01

    Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
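
    The general idea of combining three different classifier types by majority voting can be sketched as follows; the pairwise NORM/LBBB/RBBB construction, the matched feature sets and the inter-patient split of the paper are not reproduced, and the data below are random placeholders.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestCentroid
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import LinearSVC

    # A minimal sketch of a three-classifier majority-voting ensemble; training and test
    # data are random placeholders, not ECG heartbeat features.
    rng = np.random.default_rng(4)
    X_train, y_train = rng.standard_normal((300, 10)), rng.integers(0, 3, 300)
    X_test = rng.standard_normal((20, 10))

    clfs = [NearestCentroid(),                 # minimum distance (nearest centroid) classifier
            LinearDiscriminantAnalysis(),      # linear discriminant classifier
            LinearSVC()]                       # linear support vector machine
    votes = np.array([c.fit(X_train, y_train).predict(X_test) for c in clfs])

    # majority vote over the three predicted class labels for each test heartbeat
    y_pred = np.array([np.bincount(col).argmax() for col in votes.T])
    ```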

  9. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers.

    PubMed

    Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu

    2014-06-05

    Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients.

  10. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
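
    A minimal sketch of solving the implicitly treated equation system iteratively with GMRES, in the spirit of the combined approach; the AIE/GEBE grouping itself is not reproduced and the sparse system below is an illustrative placeholder.

    ```python
    import numpy as np
    from scipy.sparse import identity, random as sprandom
    from scipy.sparse.linalg import gmres

    # A minimal sketch of an iterative GMRES solve of a sparse linear system, standing in
    # for the implicit-element equations; the matrix below is a well-conditioned placeholder.
    n = 200
    A = identity(n) + 0.1 * sprandom(n, n, density=0.05, random_state=0)
    b = np.ones(n)

    x, info = gmres(A.tocsr(), b)      # info == 0 signals convergence
    print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
    ```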

  11. Cosmological signature change in Cartan gravity with dynamical symmetry breaking

    NASA Astrophysics Data System (ADS)

    Magueijo, João; Rodríguez-Vázquez, Matías; Westman, Hans; Złośnik, Tom

    2014-03-01

    We investigate the possibility for classical metric signature change in a straightforward generalization of the first-order formulation of gravity, dubbed "Cartan gravity." The mathematical structure of this theory mimics the electroweak theory in that the basic ingredients are an SO(1,4) Yang-Mills gauge field A^{ab}_μ and a symmetry breaking Higgs field V^a, with no metric or affine structure of spacetime presupposed. However, these structures can be recovered, with the predictions of general relativity exactly reproduced, whenever the Higgs field breaking the symmetry to SO(1,3) is forced to have a constant (positive) norm V^a V_a. This restriction is usually imposed "by hand," but in analogy with the electroweak theory we promote the gravitational Higgs field V^a to a genuine dynamical field, subject to nontrivial equations of motion. Even though we limit ourselves to actions polynomial in these variables, we discover a rich phenomenology. Most notably we derive classical cosmological solutions exhibiting a smooth transition between Euclidean and Lorentzian signature in the four-metric. These solutions are nonsingular and arise whenever the SO(1,4) norm of the Higgs field changes sign; i.e., the signature of the metric of spacetime is determined dynamically by the gravitational Higgs field. It is possible to find a plethora of such solutions and in some of them this dramatic behavior is confined to the early Universe, with the theory asymptotically tending to Einstein gravity at late times. Curiously the theory can also naturally embody a well-known dark energy model: Peebles-Ratra quintessence.

  12. Stability of semidiscrete approximations for hyperbolic initial-boundary-value problems: An eigenvalue analysis

    NASA Technical Reports Server (NTRS)

    Warming, Robert F.; Beam, Richard M.

    1986-01-01

    A hyperbolic initial-boundary-value problem can be approximated by a system of ordinary differential equations (ODEs) by replacing the spatial derivatives by finite-difference approximations. The resulting system of ODEs is called a semidiscrete approximation. A complication is the fact that more boundary conditions are required for the spatially discrete approximation than are specified for the partial differential equation. Consequently, additional numerical boundary conditions are required and improper treatment of these additional conditions can lead to instability. For a linear initial-boundary-value problem (IBVP) with homogeneous analytical boundary conditions, the semidiscrete approximation results in a system of ODEs of the form du/dt = Au whose solution can be written as u(t) = exp(At)u(O). Lax-Richtmyer stability requires that the matrix norm of exp(At) be uniformly bounded for O less than or = t less than or = T independent of the spatial mesh size. Although the classical Lax-Richtmyer stability definition involves a conventional vector norm, there is no known algebraic test for the uniform boundedness of the matrix norm of exp(At) for hyperbolic IBVPs. An alternative but more complicated stability definition is used in the theory developed by Gustafsson, Kreiss, and Sundstrom (GKS). The two methods are compared.
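
    The boundedness requirement described above can be probed numerically, as in the sketch below: build a semidiscrete operator A and monitor the matrix norm of exp(At) over the time interval of interest. The periodic first-order upwind operator used here is an illustrative stand-in, not the report's initial-boundary-value problem.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # A minimal sketch of a Lax-Richtmyer-style check: monitor ||exp(At)|| for a
    # semidiscrete operator A; the periodic upwind operator is an illustrative stand-in.
    n = 50
    dx = 1.0 / n
    A = (np.eye(n, k=-1) - np.eye(n)) / dx    # first-order upwind differencing of u_t = -u_x
    A[0, -1] = 1.0 / dx                       # periodic closure for this illustration

    ts = np.linspace(0.0, 2.0, 21)
    norms = [np.linalg.norm(expm(A * t), 2) for t in ts]
    print("max ||exp(At)|| on [0, 2]:", max(norms))
    ```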

  13. Recovery of NORM from scales generated by oil extraction.

    PubMed

    Al Attar, Lina; Safia, Bassam; Ghani, Basem Abdul; Al Abdulah, Jamal

    2016-03-01

    Scales, containing naturally occurring radioactive materials (NORM), are a major problem in oil production that leads to costly remediation and disposal programmes. In view of environmental protection, radiological and chemical characterisation is an essential step prior to waste treatment. This study focuses on developing a protocol to recover (226)Ra and (210)Pb from scales produced by the petroleum industry. X-ray diffractograms of the scales indicated the presence of barite-strontium (Ba0.75Sr0.25SO4) and hokutolite (Ba0.69Pb0.31SO4) as main minerals. Quartz, galena and Ca2Al2SiO6(OH)2 or sphalerite and iron oxide were found in minor quantities. Incineration to 600 °C followed by enclosed digestion and acid treatment gave complete digestion. Using (133)Ba and (210)Pb tracers as internal standards gave recoveries ranging from 87 to 91% for (226)Ra and ca. 100% for (210)Pb. Radium was finally dissolved in concentrated sulphuric acid, while (210)Pb dissolved in the former solution as well as in 8 M nitric acid. Dissolving the scales would provide a better estimation of their radionuclide contents, facilitate the determination of their chemical composition, and make it possible to recycle NORM wastes in terms of radionuclide production. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches are given in tables.
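
    For orientation, the sketch below shows a generic weighted least-squares TOA solve by Gauss-Newton iteration; this is NOT the paper's direct TAQMV algorithm, only an illustration of the overdetermined TOA least-squares idea in a flat 2-D geometry with unit weights and no altitude constraint.

    ```python
    import numpy as np

    # A minimal sketch of an iterative (Gauss-Newton) TOA least-squares solve; geometry,
    # weights and noise levels are illustrative, and the direct TAQMV method is not shown.
    c = 299_792_458.0
    sensors = np.array([[0.0, 0.0], [10e3, 0.0], [0.0, 10e3], [10e3, 10e3]])
    p_true, t0_true = np.array([3e3, 4e3]), 1e-3
    toa = t0_true + np.linalg.norm(sensors - p_true, axis=1) / c   # noise-free TOAs

    x = np.array([5e3, 5e3, 0.0])                                  # initial guess (px, py, t0)
    for _ in range(10):
        d = np.linalg.norm(sensors - x[:2], axis=1)
        r = toa - (x[2] + d / c)                                   # measured minus modeled TOA
        J = np.column_stack([(x[:2] - sensors) / d[:, None] / c,   # d(model)/d(px, py)
                             np.ones(len(sensors))])               # d(model)/d(t0)
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    print("estimated position:", x[:2], " clock offset:", x[2])
    ```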

  15. Optimal rendezvous in the neighborhood of a circular orbit

    NASA Technical Reports Server (NTRS)

    Jones, J. B.

    1976-01-01

    The minimum velocity-change rendezvous solutions, when the motion may be linearized about a circular orbit, fall into two separate regions; the phase-for-free region and the general region. Phase-for-free solutions are derived from the optimum transfer solutions, require the same velocity-change expenditure, but may not be unique. Analytic solutions are presented in two of the three subregions. An algorithm is presented for determining the unique solutions in the general region. Various sources of initial conditions are discussed and three examples are presented.

  16. The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2005-01-01

    The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk followed by a rapid decay which results in a sharper outer boundary than predicted by the minimum mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.
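
    For context, the sketch below contrasts the commonly quoted Hayashi (1981) power-law surface density with the cumulative-mass view; the 1700 g/cm^2 normalization and -3/2 slope are the textbook values, and the paper's smoother cumulative-mass fit is not reproduced.

    ```python
    import numpy as np

    # A minimal sketch of the classical Hayashi power-law gas surface density and the
    # cumulative mass it implies; the normalization and slope are textbook values.
    AU_cm = 1.495978707e13
    r = np.linspace(0.4, 36.0, 2000)             # heliocentric distance, AU
    sigma = 1700.0 * r ** -1.5                   # gas surface density, g/cm^2

    # cumulative mass inside r: integrate 2*pi*r*Sigma dr (grams)
    dM = 2.0 * np.pi * (r * AU_cm) * sigma * np.gradient(r * AU_cm)
    M_cum = np.cumsum(dM)
    print("total gas mass / Msun ~ %.3f" % (M_cum[-1] / 1.989e33))
    ```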

  17. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
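
    To illustrate the 1-norm machinery in the simplest possible setting, the sketch below minimizes the 1-norm of the model by variable splitting m = m+ - m-, solved here as a linear program for clarity; the paper instead frames its two 1-norm problems as non-negative least-squares problems, which is not reproduced, and the forward operator and data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # A minimal sketch of 1-norm minimization by variable splitting m = m+ - m-, solved as
    # a linear program; G and d are illustrative placeholders for an underdetermined problem.
    rng = np.random.default_rng(5)
    G = rng.standard_normal((10, 40))                 # underdetermined forward operator
    m_true = np.zeros(40)
    m_true[5] = 2.0                                   # sparse true model
    d = G @ m_true

    n = G.shape[1]
    res = linprog(c=np.ones(2 * n),                   # minimize sum(m+) + sum(m-) = ||m||_1
                  A_eq=np.hstack([G, -G]), b_eq=d,    # fit the data exactly
                  bounds=[(0, None)] * (2 * n))
    m = res.x[:n] - res.x[n:]
    print("recovered nonzero index:", np.argmax(np.abs(m)))
    ```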

  18. Social shaping of food intervention initiatives at worksites: canteen takeaway schemes at two Danish hospitals.

    PubMed

    Poulsen, Signe; Jørgensen, Michael Søgaard

    2011-09-01

    The aim of this article is to analyse the social shaping of worksite food interventions at two Danish worksites. The overall aims are to contribute first, to the theoretical frameworks for the planning and analysis of food and health interventions at worksites and second, to a foodscape approach to worksite food interventions. The article is based on a case study of the design of a canteen takeaway (CTA) scheme for employees at two Danish hospitals. This was carried out as part of a project to investigate the shaping and impact of schemes that offer employees meals to buy, to take home or to eat at the worksite during irregular working hours. Data collection was carried out through semi-structured interviews with stakeholders within the two change processes. Two focus group interviews were also carried out at one hospital and results from a user survey carried out by other researchers at the other hospital were included. Theoretically, the study was based on the social constitution approach to change processes at worksites and a co-evolution approach to problem-solution complexes as part of change processes. Both interventions were initiated because of the need to improve the food supply for the evening shift and the work-life balance. The shaping of the schemes at the two hospitals became rather different change processes due to the local organizational processes shaped by previously developed norms and values. At one hospital the change process challenged norms and values about food culture and challenged ideas in the canteen kitchen about working hours. At the other hospital, the change was more of a learning process that aimed at finding the best way to offer a CTA scheme. Worksite health promotion practitioners should be aware that the intervention itself is an object of negotiation between different stakeholders at a worksite based on existing norms and values. The social contextual model and the setting approach to worksite health interventions lack reflections about how such norms and values might influence the shaping of the intervention. It is recommended that future planning and analyses of worksite health promotion interventions apply a combination of the social constitution approach to worksites and an integrated food supply and demand perspective based on analyses of the co-evolution of problem-solution complexes.

  19. Benefits and risks of adopting the global code of practice for recreational fisheries

    USGS Publications Warehouse

    Arlinghaus, Robert; Beard, T. Douglas; Cooke, Steven J.; Cowx, Ian G.

    2012-01-01

    Recreational fishing constitutes the dominant or sole use of many fish stocks, particularly in freshwater ecosystems in Western industrialized countries. However, despite their social and economic importance, recreational fisheries are generally guided by local or regional norms and standards, with few comprehensive policy and development frameworks existing across jurisdictions. We argue that adoption of a recently developed Global Code of Practice (CoP) for Recreational Fisheries can provide benefits for moving recreational fisheries toward sustainability on a global scale. The CoP is a voluntary document, specifically framed toward recreational fisheries practices and issues, thereby complementing and extending the United Nation's Code of Conduct for Responsible Fisheries by the Food and Agricultural Organization. The CoP for Recreational Fisheries describes the minimum standards of environmentally friendly, ethically appropriate, and—depending on local situations—socially acceptable recreational fishing and its management. Although many, if not all, of the provisions presented in the CoP are already addressed through national fisheries legislation and state-based fisheries management regulations in North America, adopting a common framework for best practices in recreational fisheries across multiple jurisdictions would further promote their long-term viability in the face of interjurisdictional angler movements and some expanding threats to the activity related to shifting sociopolitical norms.

  20. Dynamic mechanism of equivalent conductivity minimum of electrolyte solution

    NASA Astrophysics Data System (ADS)

    Yamaguchi, T.; Matsuoka, T.; Koda, S.

    2011-10-01

    The theory on the electric conductivity of electrolyte solutions that we have developed [T. Yamaguchi, T. Matsuoka, and S. Koda, J. Chem. Phys. 127, 064508 (2007)] is applied to a model electrolyte solution that shows a minimum of the equivalent conductivity as a function of concentration [T. Yamaguchi, T. Akatsuka, and S. Koda, J. Chem. Phys. 134, 244506 (2011)]. The theory succeeds in reproducing the equivalent conductivity minimum, whereas the mode-coupling theory (MCT) underestimates the conductivity in the low-concentration regime. The theory can also reproduce the decrease in the relaxation time of the conductivity with increasing concentration that we have demonstrated with a Brownian dynamics simulation. A detailed analysis shows that the relaxation of the conductivity occurs through two processes. The faster one corresponds to the collision between a cation and an anion, and the slower one to the polarization of the ionic atmosphere. The increase in the equivalent conductivity with concentration is attributed to the decrease in the effect of the ionic atmosphere, which is in turn explained by the fact that the counter ion cannot penetrate into the repulsive core when the Debye screening length is comparable to or smaller than the ionic diameter. The same mechanism is also observed in the MCT calculation with the static structure factor determined by the mean-spherical approximation.

  1. Photometric followup investigations on LAMOST survey target Ly And

    NASA Astrophysics Data System (ADS)

    Lu, Hong-peng; Zhang, Li-yun; Han, Xianming L.; Pi, Qing-feng; Wang, Dai-mei

    2017-02-01

    We present a low-dispersion spectrum and two sets of CCD photometric light curves of the eclipsing binary LY And for the first time. The spectrum of LY And was classified as G2. We derived an updated ephemeris based on all previously available and our newly acquired minimum light times. Our analyses of LY And light curve minimum times reveal that the differences between calculated and observed minimum times for LY And can be represented by an upward parabolic curve, which means its orbital period is increasing at a rate of 1.88 (± 0.13) × 10⁻⁷ days/year. This increase in orbital period may be interpreted as mass transfer from the primary component to the secondary component, at a rate of dM₁/dt = -4.54 × 10⁻⁸ M⊙/year. By analyzing our CCD photometric light curves obtained in 2015, we obtained its photometric solution with the Wilson-Devinney program. This photometric solution also fits our light curves obtained in 2014 very well. Our photometric solution shows that LY And is a contact eclipsing binary and its contact factor is f = (17.8 ± 1.9)%. Furthermore, both our spectroscopic and photometric data show no obvious chromospheric activity of LY And.
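
    As a hedged illustration of the quadratic O-C analysis described above (using invented timings, not the LY And data), the sketch below fits a parabola to O-C residuals and converts the quadratic coefficient into a period change rate.

    ```python
    import numpy as np

    # Hypothetical illustration: estimate a secular period increase from
    # eclipse minimum timings via a quadratic O-C fit (all values made up).
    P = 0.4                                   # assumed orbital period in days
    epochs = np.arange(0, 20000, 500)         # cycle numbers E
    true_dPdE = 2.0e-10                       # assumed period change per cycle (days)
    o_minus_c = 0.5 * true_dPdE * epochs**2 + np.random.normal(0, 1e-4, epochs.size)

    # Quadratic fit: O-C = c0 + c1*E + c2*E^2, with dP/dE = 2*c2
    c2, c1, c0 = np.polyfit(epochs, o_minus_c, 2)
    dP_dE = 2.0 * c2                          # days per cycle
    dP_dt = dP_dE / P * 365.25                # days per year (dE/dt = 1/P cycles per day)
    print(f"estimated dP/dt = {dP_dt:.2e} days/year")
    ```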

  2. Treating Vomiting

    MedlinePlus

    ... those described below. Estimated Oral Fluid and Electrolyte Requirements by Body Weight: Body Weight (in pounds); Minimum Daily Fluid Requirements (in ounces)*; Electrolyte Solution Requirements for Mild Diarrhea ( ...

  3. A simple DVH generation technique for various radiotherapy treatment planning systems for an independent information system

    NASA Astrophysics Data System (ADS)

    Min, Byung Jun; Nam, Heerim; Jeong, Il Sun; Lee, Hyebin

    2015-07-01

    In recent years, the use of a picture archiving and communication system (PACS) for radiation therapy has become the norm in hospital environments and has been suggested for collecting and managing data using Digital Imaging and Communication in Medicine (DICOM) objects from different treatment planning systems (TPSs). However, some TPSs do not provide the ability to export the dose-volume histogram (DVH) in text or other formats. In addition, plan review systems for various TPSs often allow DVH recalculations with different algorithms. These algorithms result in inevitable discrepancies between the values obtained with the recalculation and those obtained with the TPS itself. The purpose of this study was to develop a simple method for generating reproducible DVH values by using the TPSs. Treatment planning information, including structures and delivered dose, was exported in the DICOM format from the Eclipse v8.9 or the Pinnacle v9.6 planning systems. The supersampling and trilinear interpolation methods were employed to calculate the DVH data from 35 treatment plans. The discrepancies between the DVHs extracted from each TPS and those extracted by using the proposed calculation method were evaluated with respect to the supersampling ratio. The volume, minimum dose, maximum dose, and mean dose were compared. The variations in DVHs from multiple TPSs were compared by using the MIM software v6.1, which is a commercially available treatment planning comparison tool. The overall comparisons of the volume, minimum dose, maximum dose, and mean dose showed that the proposed method produced smaller discrepancies relative to the TPS than the MIM software did. As the structure volume decreased, the overall percent difference increased. The largest difference was observed in small organs such as the eyeball, eye lens, and optic nerve, which had volumes below 10 cc. A simple and useful technique was developed to generate a DVH with an acceptable error from a proprietary TPS. This study provides a convenient and common framework that will allow the use of a single well-managed storage solution for an independent information system.
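
    As a minimal sketch of the underlying bookkeeping (not the authors' implementation), the following computes a cumulative DVH from a 3D dose grid and a boolean structure mask on a common voxel grid; the supersampling and trilinear interpolation described above would be applied to the dose grid beforehand. All arrays and values are made up.

    ```python
    import numpy as np

    # Minimal sketch: cumulative DVH from a dose grid and a structure mask.
    def cumulative_dvh(dose, mask, n_bins=200):
        d = dose[mask]                                  # doses of voxels inside the structure
        bins = np.linspace(0.0, d.max(), n_bins)
        # fraction of the structure volume receiving at least each dose level
        volume_fraction = np.array([(d >= b).mean() for b in bins])
        return bins, volume_fraction

    # toy data
    dose = np.random.gamma(shape=5.0, scale=10.0, size=(40, 40, 40))   # Gy, made up
    mask = np.zeros_like(dose, dtype=bool)
    mask[10:20, 10:20, 10:20] = True
    bins, vf = cumulative_dvh(dose, mask)
    print(f"mean dose = {dose[mask].mean():.1f} Gy, D_max = {dose[mask].max():.1f} Gy")
    ```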

  4. The Bloch Approximation in Periodically Perforated Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conca, C.; Gomez, D., E-mail: gomezdel@unican.es; Lobo, M.

    2005-06-15

    We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω^ε (Ω^ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.

  5. The research subject as wage earner.

    PubMed

    Anderson, James A; Weijer, Charles

    2002-01-01

    The practice of paying research subjects for participating in clinical trials has yet to receive an adequate moral analysis. Dickert and Grady argue for a wage payment model in which research subjects are paid an hourly wage based on that of unskilled laborers. If we accept this approach, what follows? Norms for just working conditions emerge from workplace legislation and political theory. All workers, including paid research subjects under Dickert and Grady's analysis, have a right to at least minimum wage, a standard work week, extra pay for overtime hours, a safe workplace, no fault compensation for work-related injury, and union organization. If we accept that paid research subjects are wage earners like any other, then the implications for changes to current practice are substantial.

  6. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  7. Global well-posedness and decay estimates of strong solutions to a two-phase model with magnetic field

    NASA Astrophysics Data System (ADS)

    Wen, Huanyao; Zhu, Limei

    2018-02-01

    In this paper, we consider the Cauchy problem for a two-phase model with magnetic field in three dimensions. The global existence and uniqueness of the strong solution as well as the time decay estimates in H^2(R^3) are obtained by introducing a new linearized system with respect to (n^γ − ñ^γ, n − ñ, P − P̃, u, H) for constants ñ ≥ 0 and P̃ > 0, and doing some new a priori estimates in Sobolev spaces to get the uniform upper bound of (n − ñ, n^γ − ñ^γ) in the H^2(R^3) norm.

  8. Corrigendum to "Scattering above energy norm of solutions of a loglog energy-supercritical Schrödinger equation with radial data"

    NASA Astrophysics Data System (ADS)

    Roy, Tristan

    2018-05-01

    The purpose of this corrigendum is to point out some errors that appear in [1]. Our main result remains valid, i.e. scattering of H̃^k := Ḣ^k(R^n) ∩ Ḣ^1(R^n) solutions of the loglog energy-supercritical Schrödinger equation i∂_t u + Δu = |u|^{4/(n−2)} u log^c(log(10 + |u|^2)), 0 < c < c_n, with radial data u(0) := u_0 ∈ H̃^k, k > n/2, but with slightly different values of c_n, i.e. c_n = 1/5772 if n = 3 and c_n = 3/8024 if n = 4. We propose some corrections.

  9. High-Order Energy Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2009-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.

  10. Using 4th order Runge-Kutta method for solving a twisted Skyrme string equation

    NASA Astrophysics Data System (ADS)

    Hadi, Miftachul; Anderson, Malcolm; Husein, Andri

    2016-03-01

    We study the numerical solution, especially using the 4th-order Runge-Kutta method, of a twisted Skyrme string equation. We find numerically that the minimum energy per unit length of the vortex solution for a twisted Skyrmion string is 20.37 × 10^60 eV/m.
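
    For reference, a minimal sketch of the classical 4th-order Runge-Kutta step is given below; the right-hand side in the example is a simple harmonic-oscillator placeholder, not the twisted Skyrme string equation.

    ```python
    import numpy as np

    # Classical RK4 step for y' = f(t, y), with y a state vector.
    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    # Example: integrate y'' = -y written as a first-order system
    f = lambda t, y: np.array([y[1], -y[0]])
    t, y, h = 0.0, np.array([1.0, 0.0]), 0.01
    for _ in range(int(2 * np.pi / h)):
        y = rk4_step(f, t, y, h)
        t += h
    print(y)   # should be close to [1, 0] after roughly one full period
    ```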

  11. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.

  12. Rarity-weighted richness: a simple and reliable alternative to integer programming and heuristic algorithms for minimum set and maximum coverage problems in conservation planning.

    PubMed

    Albuquerque, Fabio; Beier, Paul

    2015-01-01

    Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
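
    Because the abstract notes that RWR can be implemented in a few lines, here is a minimal sketch under the usual definition (each species is weighted by one over the number of sites it occupies, and a site's score is the sum of the weights of the species present); the presence-absence matrix is invented for illustration.

    ```python
    import numpy as np

    # Rarity-weighted richness (RWR) prioritization on toy data.
    presence = np.array([[1, 0, 1, 0],      # rows = sites, cols = species
                         [1, 1, 0, 0],
                         [0, 1, 1, 1],
                         [1, 0, 0, 1]])

    species_range = presence.sum(axis=0)          # number of sites per species
    rwr = (presence / species_range).sum(axis=1)  # rarity-weighted richness per site
    priority_order = np.argsort(rwr)[::-1]        # rank sites from highest to lowest RWR
    print("site RWR scores:", np.round(rwr, 2))
    print("priority order :", priority_order)
    ```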

  13. A Handful of Paragraphs on "Translation" and "Norms."

    ERIC Educational Resources Information Center

    Toury, Gideon

    1998-01-01

    Presents some thoughts on the issue of translation and norms, focusing on the relationships between social agreements, conventions, and norms; translational norms; acts of translation and translation events; norms and values; norms for translated texts versus norms for non-translated texts; and competing norms. Comments on the reactions to three…

  14. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
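
    As a minimal sketch of the unsuperiorized baseline only (not the paper's S-CG), the following applies plain CG to the least-squares normal equations; the superiorized variants described above would interleave small TV-reducing perturbations of the iterate between these CG steps. The matrix sizes and data are made up.

    ```python
    import numpy as np

    # Plain conjugate gradient on the normal equations A^T A x = A^T b.
    def cg_least_squares(A, b, n_iter=50):
        x = np.zeros(A.shape[1])
        r = A.T @ (b - A @ x)                 # residual of the normal equations
        p = r.copy()
        for _ in range(n_iter):
            Ap = A.T @ (A @ p)
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x

    A = np.random.randn(100, 20)
    x_true = np.random.randn(20)
    b = A @ x_true + 0.01 * np.random.randn(100)
    print(np.linalg.norm(cg_least_squares(A, b) - x_true))   # should be small
    ```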

  15. A norm knockout method on indirect reciprocity to reveal indispensable norms

    PubMed Central

    Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya

    2017-01-01

    Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out. PMID:28276485

  16. A norm knockout method on indirect reciprocity to reveal indispensable norms

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya

    2017-03-01

    Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out.

  17. Addressing the minimum fleet problem in on-demand urban mobility.

    PubMed

    Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C

    2018-05-01

    Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the following 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].

  18. Resolving the 180-degree ambiguity in vector magnetic field measurements: The 'minimum' energy solution

    NASA Technical Reports Server (NTRS)

    Metcalf, Thomas R.

    1994-01-01

    I present a robust algorithm that resolves the 180-deg ambiguity in measurements of the solar vector magnetic field. The technique simultaneously minimizes both the divergence of the magnetic field and the electric current density using a simulated annealing algorithm. This results in the field orientation with approximately minimum free energy. The technique is well-founded physically and is simple to implement.

  19. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.

  20. Nonstatic radiating spheres in general relativity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krori, K.D.; Borgohain, P.; Sarma, R.

    1985-02-15

    The method of Herrera, Jimenez, and Ruggeri of obtaining nonstatic solutions of Einstein's field equations to study the evolution of stellar bodies is applied to obtain two models of nonstatic radiating spheres from two well-known static solutions of field equations, viz., Tolman's solutions IV and V. Whereas Tolman's type-IV model is found to be contracting for the period under investigation, Tolman's type-V model shows a bounce after attaining a minimum radius.

  1. SPECIFIC HEAT INDICATOR

    DOEpatents

    Horn, F.L.; Binns, J.E.

    1961-05-01

    Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions which are added and subtracted in accordance with the solution and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.

  2. Stress corrosion behavior of Ru-enhanced alpha-beta titanium alloys in methanol solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schutz, R.W.; Horrigan, J.M.; Bednarowicz, T.A.

    1998-12-31

    Conservative, practical guidelines for the minimum water content required to prevent methanolic stress corrosion cracking (SCC) of Ti-6Al-4V-Ru and Ti-3Al-2.5V-Ru alloy tubulars have been developed from slow strain rate testing in plain and acidified NaCl-saturated methanol-water solutions at 25 °C. A minimum methanol water content of 10 wt.% is proposed for Ti-6Al-4V-Ru, whereas 2-3 wt.% is sufficient for the lower strength Ti-3Al-2.5V-Ru alloy. Although HCl-acidification aggravated methanolic SCC, intermixing of methanol with crude oil or pure hydrocarbons, H2S gas saturation, and/or increasing temperature diminished cracking susceptibility in these alloy tubulars.

  3. Limits to Open Class Performance?

    NASA Technical Reports Server (NTRS)

    Bowers, Albion H.

    2008-01-01

    This presentation discusses open or unlimited class aircraft performance limitations and design solutions. Limitations in this class of aircraft include slow climbing flight, which requires low wing loading; high cruise speed, which requires high wing loading; gains in induced or viscous drag alone, which result in only half the gain overall; and other structural problems (yaw inertia and spins, flutter, and static loads integrity). Design solutions include introducing minimum induced drag for a given span (elliptical span load or winglets) and introducing minimum induced drag for a bell-shaped span load. It is concluded that open class performance limits (under current rules and technologies) are very close to absolute limits, though some gains remain to be made from unexplored areas and new technologies.

  4. A proof of the DBRF-MEGN method, an algorithm for deducing minimum equivalent gene networks

    PubMed Central

    2011-01-01

    Background We previously developed the DBRF-MEGN (difference-based regulation finding-minimum equivalent gene network) method, which deduces the most parsimonious signed directed graphs (SDGs) consistent with expression profiles of single-gene deletion mutants. However, until the present study, we have not presented the details of the method's algorithm or a proof of the algorithm. Results We describe in detail the algorithm of the DBRF-MEGN method and prove that the algorithm deduces all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. Conclusions The DBRF-MEGN method provides all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. PMID:21699737

  5. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
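
    To illustrate the kind of iteration involved (not the authors' FWI code), here is a minimal linearized Bregman sketch for an ℓ1 basis-pursuit problem on a toy compressive-sensing example; the matrix, sparsity pattern, and the parameters mu and delta are assumptions chosen for illustration only.

    ```python
    import numpy as np

    # Linearized Bregman iteration for min ||x||_1 s.t. Ax = b (toy example).
    def shrink(v, mu):
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearized_bregman(A, b, mu=5.0, delta=1.0, n_iter=3000):
        x = np.zeros(A.shape[1])
        v = np.zeros(A.shape[1])
        tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
        for _ in range(n_iter):
            v += tau * A.T @ (b - A @ x)           # gradient step on the data residual
            x = delta * shrink(v, mu)              # soft-thresholding promotes sparsity
        return x

    A = np.random.randn(60, 200)
    x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]   # sparse model
    b = A @ x_true
    x_rec = linearized_bregman(A, b)
    print("largest entries at indices:", np.sort(np.argsort(np.abs(x_rec))[-3:]))
    ```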

  6. Black holes in vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji

    We study static and spherically symmetric black hole (BH) solutions in second-order generalized Proca theories with nonminimal vector field derivative couplings to the Ricci scalar, the Einstein tensor, and the double dual Riemann tensor. We find concrete Lagrangians which give rise to exact BH solutions by imposing two conditions of the two identical metric components and the constant norm of the vector field. These exact solutions are described by either Reissner-Nordström (RN), stealth Schwarzschild, or extremal RN solutions with a non-trivial longitudinal mode of the vector field. We then numerically construct BH solutions without imposing these conditions. For cubic and quartic Lagrangians with power-law couplings which encompass vector Galileons as the specific cases, we show the existence of BH solutions with the difference between two non-trivial metric components. The quintic-order power-law couplings do not give rise to non-trivial BH solutions regular throughout the horizon exterior. The sixth-order and intrinsic vector-mode couplings can lead to BH solutions with a secondary hair. For all the solutions, the vector field is regular at least at the future or past horizon. The deviation from General Relativity induced by the Proca hair can be potentially tested by future measurements of gravitational waves in the nonlinear regime of gravity.

  7. Understanding multiple levels of norms about teen pregnancy and their relationships to teens' sexual behaviors.

    PubMed

    Mollborn, Stefanie; Domingue, Benjamin W; Boardman, Jason D

    2014-06-01

    Researchers seeking to understand teen sexual behaviors often turn to age norms, but they are difficult to measure quantitatively. Previous work has usually inferred norms from behavioral patterns or measured group-level norms at the individual level, ignoring multiple reference groups. Capitalizing on the multilevel design of the Add Health survey, we measure teen pregnancy norms perceived by teenagers, as well as average norms at the school and peer network levels. School norms predict boys' perceived norms, while peer network norms predict girls' perceived norms. Peer network and individually perceived norms against teen pregnancy independently and negatively predict teens' likelihood of sexual intercourse. Perceived norms against pregnancy predict increased likelihood of contraception among sexually experienced girls, but sexually experienced boys' contraceptive behavior is more complicated: When both the boy and his peers or school have stronger norms against teen pregnancy he is more likely to contracept, and in the absence of school or peer norms against pregnancy, boys who are embarrassed are less likely to contracept. We conclude that: (1) patterns of behavior cannot adequately operationalize teen pregnancy norms, (2) norms are not simply linked to behaviors through individual perceptions, and (3) norms at different levels can operate independently of each other, interactively, or in opposition. This evidence creates space for conceptualizations of agency, conflict, and change that can lead to progress in understanding age norms and sexual behaviors.

  8. Understanding multiple levels of norms about teen pregnancy and their relationships to teens’ sexual behaviors

    PubMed Central

    Mollborn, Stefanie; Domingue, Benjamin W.; Boardman, Jason D.

    2014-01-01

    Researchers seeking to understand teen sexual behaviors often turn to age norms, but they are difficult to measure quantitatively. Previous work has usually inferred norms from behavioral patterns or measured group-level norms at the individual level, ignoring multiple reference groups. Capitalizing on the multilevel design of the Add Health survey, we measure teen pregnancy norms perceived by teenagers, as well as average norms at the school and peer network levels. School norms predict boys’ perceived norms, while peer network norms predict girls’ perceived norms. Peer network and individually perceived norms against teen pregnancy independently and negatively predict teens’ likelihood of sexual intercourse. Perceived norms against pregnancy predict increased likelihood of contraception among sexually experienced girls, but sexually experienced boys’ contraceptive behavior is more complicated: When both the boy and his peers or school have stronger norms against teen pregnancy he is more likely to contracept, and in the absence of school or peer norms against pregnancy, boys who are embarrassed are less likely to contracept. We conclude that: (1) patterns of behavior cannot adequately operationalize teen pregnancy norms, (2) norms are not simply linked to behaviors through individual perceptions, and (3) norms at different levels can operate independently of each other, interactively, or in opposition. This evidence creates space for conceptualizations of agency, conflict, and change that can lead to progress in understanding age norms and sexual behaviors. PMID:25104920

  9. Norm-Aware Socio-Technical Systems

    NASA Astrophysics Data System (ADS)

    Savarimuthu, Bastin Tony Roy; Ghose, Aditya

    The following sections are included: * Introduction * The Need for Norm-Aware Systems * Norms in human societies * Why should software systems be norm-aware? * Case Studies of Norm-Aware Socio-Technical Systems * Human-computer interactions * Virtual environments and multi-player online games * Extracting norms from big data and software repositories * Norms and Sustainability * Sustainability and green ICT * Norm awareness through software systems * Where To, From Here? * Conclusions

  10. Eigenbeam analysis of the diversity in bat biosonar beampatterns.

    PubMed

    Caspers, Philip; Müller, Rolf

    2015-03-01

    A quantitative analysis of the interspecific variability in bat biosonar beampatterns has been carried out on 267 numerical predictions of emission and reception beampatterns from 98 different species. Since these beampatterns did not share a common orientation, an alignment was necessary to analyze the variability in the shape of the patterns. To achieve this, beampatterns were aligned using a pairwise optimization framework based on a rotation-dependent cost function. The sum of the p-norms between beam-gain functions across frequency served as a figure of merit. For a representative subset of the data, it was found that all pairwise beampattern alignments resulted in a unique global minimum. This minimum was found to be contained in a subset of all possible beampattern rotations that could be predicted by the overall beam orientation. Following alignment, the beampatterns were decomposed into principal components. The average beampattern consisted of a symmetric, positionally static single lobe that narrows and became progressively asymmetric with increasing frequency. The first three "eigenbeams" controlled the beam width of the beampattern across frequency while higher rank eigenbeams account for symmetry and lobe motion. Reception and emission beampatterns could be distinguished (85% correct classification) based on the first 14 eigenbeams.
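
    As a loose illustration of this kind of decomposition (not the study's code or data), the sketch below forms "eigenbeams" by principal component analysis of aligned, vectorized beampatterns via the SVD; the grid size and random values are placeholders.

    ```python
    import numpy as np

    # Eigenbeam-style PCA: flatten aligned beampatterns, remove the mean beam,
    # and take principal components from the SVD of the centered data matrix.
    rng = np.random.default_rng(0)
    beams = rng.random((267, 64 * 32))            # 267 beampatterns on a 64x32 grid (toy)

    mean_beam = beams.mean(axis=0)
    U, s, Vt = np.linalg.svd(beams - mean_beam, full_matrices=False)
    eigenbeams = Vt                               # rows are the eigenbeams
    explained = s**2 / np.sum(s**2)               # variance fraction per eigenbeam
    print("variance captured by first 3 eigenbeams:", explained[:3].sum())
    ```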

  11. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit underrelaxation, point implicit, and lower upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight order of magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point it is out-performed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  12. Einstein Equations Under Polarized U(1) Symmetry in an Elliptic Gauge

    NASA Astrophysics Data System (ADS)

    Huneau, Cécile; Luk, Jonathan

    2018-06-01

    We prove local existence of solutions to the Einstein-null dust system under polarized U(1) symmetry in an elliptic gauge. Using in particular the previous work of the first author on the constraint equations, we show that one can identify freely prescribable data, solve the constraint equations, and construct a unique local in time solution in an elliptic gauge. Our main motivation for this work, in addition to merely constructing solutions in an elliptic gauge, is to provide a setup for our companion paper in which we study high frequency backreaction for the Einstein equations. In that work, the elliptic gauge we consider here plays a crucial role to handle high frequency terms in the equations. The main technical difficulty in the present paper, in view of the application in our companion paper, is that we need to build a framework consistent with the solution being high frequency, and therefore having large higher order norms. This difficulty is handled by exploiting a reductive structure in the system of equations.

  13. Asymptotics and numerics of a family of two-dimensional generalized surface quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Ohkitani, Koji

    2012-09-01

    We study the generalised 2D surface quasi-geostrophic (SQG) equation, where the active scalar is given by a fractional power α of the Laplacian applied to the stream function. This includes the 2D SQG and Euler equations as special cases. Using Poincaré's successive approximation to higher α-derivatives of the active scalar, we derive a variational equation for describing perturbations in the generalized SQG equation. In particular, in the limit α → 0, an asymptotic equation is derived on a stretched time variable τ = αt, which unifies equations in the family near α = 0. The successive approximation is also discussed at the other extreme of the 2D Euler limit α = 2 − 0. Numerical experiments are presented for both limits. We consider whether the solution behaves in a more singular fashion, with more effective nonlinearity, when α is increased. Two competing effects are identified: the regularizing effect of a fractional inverse Laplacian (control by conservation) and cancellation by symmetry (nonlinearity depletion). Near α = 0 (complete depletion), the solution behaves in a more singular fashion as α increases. Near α = 2 (maximal control by conservation), the solution behaves in a more singular fashion as α decreases, suggesting that there may be some α in [0, 2] at which the solution behaves in the most singular manner. We also present some numerical results of the family for α = 0.5, 1, and 1.5. On the original time t, the H^1 norm of θ generally grows more rapidly with increasing α. However, on the new time τ, this order is reversed. On the other hand, contour patterns for different α appear to be similar at fixed τ, even though the norms are markedly different in magnitude. Finally, point-vortex systems for the generalized SQG family are discussed to shed light on the above problems of time scale.

  14. High-Order Non-Reflecting Boundary Conditions for the Linearized Euler Equations

    DTIC Science & Technology

    2008-09-01

    rotational effect. Now this rotational effect can be simplified. The atmosphere is thin compared to the radius of the Earth. Furthermore, atmospheric flows...error norm of the discrete solution. Blayo and Debreu [13] considered a characteristic variable approach to NRBCs in first-order systems for ocean and...Third Edition, John Wiley and Sons, New York, 1995. [77] Jensen, T., "Open Boundary Conditions in Stratified Ocean Models," Journal of Marine Systems

  15. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  16. Boggle Logic Puzzles: Minimal Solutions

    ERIC Educational Resources Information Center

    Needleman, Jonathan

    2013-01-01

    Boggle logic puzzles are based on the popular word game Boggle played backwards. Given a list of words, the problem is to recreate the board. We explore these puzzles on a 3 x 3 board and find the minimum number of three-letter words needed to create a puzzle with a unique solution. We conclude with a series of open questions.

  17. 21 CFR 177.1637 - Poly(oxy-1,2-ethanediyloxycarbonyl-2,6-naphthalenediylcarbonyl) resins.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... per cubic centimeter. (2) Inherent viscosity. The finished food-contact article shall have a minimum inherent viscosity of 0.55 deciliter per gram in a solution of 0.1 gram of polymer in 100 milliliters of a 25/40/35 (weight/weight/weight) solution of p-chlorophenol/tetrachloroethane/phenol. The viscosity is...

  18. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization

  19. Ways of giving benefits in marriage: norm use, relationship satisfaction, and attachment-related variability.

    PubMed

    Clark, Margaret S; Lemay, Edward P; Graham, Steven M; Pataki, Sherri P; Finkel, Eli J

    2010-07-01

    Couples reported on bases for giving support and on relationship satisfaction just prior to and approximately 2 years into marriage. Overall, a need-based, noncontingent (communal) norm was seen as ideal and was followed, and greater use of this norm was linked to higher relationship satisfaction. An exchange norm was seen as not ideal and was followed significantly less frequently than was a communal norm; by 2 years into marriage, greater use of an exchange norm was linked with lower satisfaction. Insecure attachment predicted greater adherence to an exchange norm. Idealization of and adherence to a communal norm dropped slightly across time. As idealization of a communal norm and own use and partner use of a communal norm decreased, people high in avoidance increased their use of an exchange norm, whereas people low in avoidance decreased their use of an exchange norm. Anxious individuals evidenced tighter links between norm use and marital satisfaction relative to nonanxious individuals. Overall, a picture of people valuing a communal norm and striving toward adherence to a communal norm emerged, with secure individuals doing so with more success and equanimity across time than insecure individuals.

  20. Barriers and dispersal surfaces in minimum-time interception

    NASA Technical Reports Server (NTRS)

    Rajan, N.; Ardema, M. D.

    1982-01-01

    Minimum time interception of a target moving in a horizontal plane is analyzed as a one-player differential game. Dispersal points and points on the barrier are located for a class of pursuit evasion and interception problems. These points are determined by constructing cross sections of the isochrones and hence obtaining the barrier, dispersal, and control level surfaces. The game solution maps the controls as a function of the state within the capture region.

  1. Breast Cancer Stimulation of Osteolysis

    DTIC Science & Technology

    2000-09-01

    essential medium supplemented with 10% fetal bovine serum (Hyclone, Logan, UT) and antibiotic/antimycotic solution (Sigma, St. Louis, MO) in 5% CO2 at...grown to confluence in alpha-modified minimum essential medium (Sigma) supplemented with 10% fetal bovine serum (Hyclone) at 37°C, 5% CO2. ST2 cells...phenol red-free minimum essential medium supplemented with 10% FBS, and 50 ng/ml ascorbic acid. 1,25-dihydroxyvitamin D3 and dexamethasone were

  2. Differential pricing of new pharmaceuticals in lower income European countries.

    PubMed

    Kaló, Zoltán; Annemans, Lieven; Garrison, Louis P

    2013-12-01

    Pharmaceutical companies adjust the pricing strategy of innovative medicines to the imperatives of their major markets. The ability of payers to influence the ex-factory price of new drugs depends on country population size and income per capita, among other factors. Differential pricing based on Ramsey principles is a 'second-best' solution to correct the imperfections of the global market for innovative pharmaceuticals, and it is also consistent with standard norms of equity. This analysis summarizes the boundaries of differential pharmaceutical pricing for policymakers, payers and other stakeholders in lower-income countries, with special focus on Central-Eastern Europe, and describes the feasibility and implications of potential solutions to ensure lower pharmaceutical prices as compared to higher-income countries. European stakeholders, especially in Central-Eastern Europe and at the EU level, should understand the implications of increased transparency of pricing and should develop solutions to prevent the limited accessibility of new medicines in lower-income countries.

  3. Performance bounds for nonlinear systems with a nonlinear ℒ2-gain property

    NASA Astrophysics Data System (ADS)

    Zhang, Huan; Dower, Peter M.

    2012-09-01

    Nonlinear ℒ2-gain is a finite gain concept that generalises the notion of conventional (linear) finite ℒ2-gain to admit the application of ℒ2-gain analysis tools of a broader class of nonlinear systems. The computation of tight comparison function bounds for this nonlinear ℒ2-gain property is important in applications such as small gain design. This article presents an approximation framework for these comparison function bounds through the formulation and solution of an optimal control problem. Key to the solution of this problem is the lifting of an ℒ2-norm input constraint, which is facilitated via the introduction of an energy saturation operator. This admits the solution of the optimal control problem of interest via dynamic programming and associated numerical methods, leading to the computation of the proposed bounds. Two examples are presented to demonstrate this approach.

  4. Infection disclosure in the injecting dyads of Hungarian and Lithuanian injecting drug users who self-reported being infected with hepatitis C virus or human immunodeficiency virus.

    PubMed

    Gyarmathy, V Anna; Neaigus, Alan; Li, Nan; Ujhelyi, Eszter; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2011-01-01

    The aim of this study was to assess the prevalence and correlates of disclosure to network members of being hepatitis C virus (HCV)- or human immunodeficiency virus (HIV)-infected among injecting dyads of infected injection drug users (IDUs) in Budapest, Hungary, and Vilnius, Lithuania. Multivariate generalized estimating equations (GEE) were used to assess associations. Very strong infection disclosure norms exist in Hungary, and HCV disclosure was associated with using drugs and having sex within the dyad. Non-ethnic Russian IDUs in Lithuania were more likely to disclose HCV infection to non-Roma, emotionally close and HCV-infected network members, and to those with whom they shared cookers, filters, drug solutions or rinse water or got used syringes from, and if they had fewer non-IDU or IDU network members. Ethnic Russian Lithuanian IDUs were more likely to disclose HCV if they had higher disclosure attitude and knowledge scores, 'trusted' network members, and had lower non-injecting network density and higher injecting network density. HIV-infected Lithuanian IDUs were more likely to disclose to 'trusted' network members. Disclosure norms matched disclosure behaviour in Hungary, while disclosure in Lithuania to 'trusted' network members suggests possible stigmatization. Ongoing free and confidential HCV/HIV testing services for IDUs are needed to emphasize and strengthen disclosure norms, and to decrease stigma.

  5. The stabilizing role of the Sabbath in pre-monarchic Israel: a mathematical model.

    PubMed

    Livni, Joseph; Stone, Lewi

    2015-03-01

    The three monotheistic cultures have many common institutions and some of them germinated in pre-monarchic Israel. Reasonably, the essential institutions were in place at that starting point; this work explores the possibility that the Sabbath is one of these institutions. Our mathematical examination points to the potential cultural, civic, and social role of the weekly Sabbath, that is, the Sabbath institution, in controlling deviation from social norms. It begins with an analogy between spread of transgression (defined as lack of conformity with social norms) and of biological infection. Borrowing well-known mathematical methods, we derive solution sets of social equilibrium and study their social stability. The work shows how a weekly Sabbath could in theory enhance social resilience in comparison with a similar assembly with a more natural and longer period, say between New Moon and Full Moon. The examination reveals that an efficient Sabbath institution has the potential to ensure a stable organization and suppress occasional appearances of transgression from cultural norms and boundaries. The work suggests the existence of a sharp threshold governed by the "Basic Sabbath Number ש0"-a critical observance of the Sabbath, or large enough ש0, is required to ensure suppression of transgression. Subsequently, the model is used to explore an interesting question: how old is the Sabbath? The work is interdisciplinary, combining anthropological concepts with mathematical analysis and with archaeological parallels in regards to the findings.

  6. Concave 1-norm group selection

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2015-01-01

    Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave ℓ1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the ℓ1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206
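
    As a minimal sketch of the coordinate descent idea mentioned above (applied here to a plain lasso penalty rather than the paper's concave group penalties, whose thresholding rule is not reproduced), the following cycles through coordinates and applies soft-thresholding; the data are simulated for illustration.

    ```python
    import numpy as np

    # Coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1 (plain lasso).
    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_cd(X, y, lam, n_sweeps=100):
        n, p = X.shape
        b = np.zeros(p)
        col_sq = (X**2).sum(axis=0) / n
        r = y - X @ b                       # current residual
        for _ in range(n_sweeps):
            for j in range(p):
                r += X[:, j] * b[j]         # remove coordinate j's contribution
                zj = X[:, j] @ r / n
                b[j] = soft(zj, lam) / col_sq[j]
                r -= X[:, j] * b[j]         # add updated contribution back
        return b

    X = np.random.randn(100, 20)
    beta = np.zeros(20); beta[[0, 5]] = [2.0, -1.5]
    y = X @ beta + 0.1 * np.random.randn(100)
    print(np.round(lasso_cd(X, y, lam=0.1), 2))
    ```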

  7. Norm - contaminated iodine production facilities decommissioning in Turkmenistan: experience and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelbutovskiy, Alexander; Cheremisin, Peter; Egorov, Alexander

    2013-07-01

    This report summarizes the data, including the cost parameters, of the former iodine production facilities decommissioning project in Turkmenistan. Before the closure, these facilities were producing iodine from underground mineral water by the method of charcoal adsorption. The main results of the Balkanabat iodine and Khazar chemical plants' site remediation, transportation and disposal campaigns are presented. The rehabilitated area covers 47.5 thousand square meters. The remediation equipment main characteristics, technical solutions and rehabilitation operations performed are indicated also. The report shows the types of the waste shipping containers, the quantity and nature of the logistics operations. The project waste turnover is about 2 million ton-kilometers. The problems encountered during the remediation of the Khazar chemical plant site are discussed: undetected waste quantities that were discovered during the operational activities required an additional volume of the disposal facility. The additional repository wall superstructure was designed and erected to accommodate this additional waste. There are data on the volume and characteristics of the NORM waste disposed: 60.4 thousand cu.m. of NORM with a total activity of 1,439 × 10^9 Bq (38.89 Ci) were disposed in all. This report summarizes the project implementation results, from 2009 to 15.02.2012 (the date of the repository closure and its placement under controlled supervision), including monitoring results within a year after the repository closure. (authors)

  8. Genotype by environment interaction and breeding for robustness in livestock

    PubMed Central

    Rauw, Wendy M.; Gomez-Raya, Luis

    2015-01-01

    The increasing size of the human population is projected to result in an increase in meat consumption. However, at the same time, the dominant position of meat as the center of meals is on the decline. Modern objections to the consumption of meat include public concerns with animal welfare in livestock production systems. Animal breeding practices have become part of the debate since it became recognized that animals in a population that have been selected for high production efficiency are more at risk for behavioral, physiological and immunological problems. As a solution, animal breeding practices need to include selection for robustness traits, which can be implemented through the use of reaction norms analysis, or though the direct inclusion of robustness traits in the breeding objective and in the selection index. This review gives an overview of genotype × environment interactions (the influence of the environment, reaction norms, phenotypic plasticity, canalization, and genetic homeostasis), reaction norms analysis in livestock production, options for selection for increased levels of production and against environmental sensitivity, and direct inclusion of robustness traits in the selection index. Ethical considerations of breeding for improved animal welfare are discussed. The discussion on animal breeding practices has been initiated and is very alive today. This positive trend is part of the sustainable food production movement that aims at feeding 9.15 billion people not just in the near future but also beyond. PMID:26539207

  9. Contact lens disinfecting solutions antibacterial efficacy: comparison between clinical isolates and the standard ISO ATCC strains of Pseudomonas aeruginosa and Staphylococcus aureus.

    PubMed

    Mohammadinia, M; Rahmani, S; Eslami, G; Ghassemi-Broumand, M; Aghazadh Amiri, M; Aghaie, Gh; Tabatabaee, S M; Taheri, S; Behgozin, A

    2012-02-01

    To evaluate the disinfectant properties of the three multipurpose contact lens disinfecting solutions available in Iran against clinical isolates and the standard ISO ATCC strains of Pseudomonas aeruginosa and Staphylococcus aureus, based on the International Organization for Standardization (ISO) 14729 guidelines. The three multipurpose solutions tested were ReNu MultiPlus, Solo Care Aqua and All-Clean Soft. The test solutions were challenged with clinical isolates and the standard strains of P. aeruginosa (ATCC 9027) and S. aureus (ATCC 6538), based on the ISO Stand-alone procedure for disinfecting products. Solutions were sampled for surviving microorganisms at the manufacturer's minimum recommended disinfection time. The number of viable organisms was determined and log reductions were calculated. All three test solutions in this study provided a reduction greater than the required mean 3.0 logarithmic reduction against the recommended standard ATCC strains of P. aeruginosa and S. aureus. The antibacterial effectiveness of Solo Care Aqua and All-Clean Soft against clinical isolates of P. aeruginosa and S. aureus was acceptable based on the ISO 14729 Stand-alone test. ReNu MultiPlus showed a minimum acceptable efficacy against the clinical isolate of S. aureus, but did not reduce the clinical isolate by the same amount. Although the contact lens disinfecting solutions meet or exceed the ISO 14729 Stand-alone primary acceptance criteria for the standard strains of P. aeruginosa and S. aureus, their efficacy may be insufficient against clinical isolates of these organisms.

  10. Contact lens disinfecting solutions antibacterial efficacy: comparison between clinical isolates and the standard ISO ATCC strains of Pseudomonas aeruginosa and Staphylococcus aureus

    PubMed Central

    Mohammadinia, M; Rahmani, S; Eslami, G; Ghassemi-Broumand, M; Aghazadh Amiri, M; Aghaie, Gh; Tabatabaee, S M; Taheri, S; Behgozin, A

    2012-01-01

    Purpose To evaluate the disinfectant properties of the three multipurpose contact lens disinfecting solutions available in Iran against clinical isolates and the standard ISO ATCC strains of Pseudomonas aeruginosa and Staphylococcus aureus, based on the International Organization for Standardization (ISO) 14729 guidelines. Methods The three multipurpose solutions tested were ReNu MultiPlus, Solo Care Aqua and All-Clean Soft. The test solutions were challenged with clinical isolates and the standard strains of P. aeruginosa (ATCC 9027) and S. aureus (ATCC 6538), based on the ISO Stand-alone procedure for disinfecting products. Solutions were sampled for surviving microorganisms at the manufacturer's minimum recommended disinfection time. The number of viable organisms was determined and log reductions were calculated. Results All three test solutions in this study provided a reduction greater than the required mean 3.0 logarithmic reduction against the recommended standard ATCC strains of P. aeruginosa and S. aureus. The antibacterial effectiveness of Solo Care Aqua and All-Clean Soft against clinical isolates of P. aeruginosa and S. aureus was acceptable based on the ISO 14729 Stand-alone test. ReNu MultiPlus showed a minimum acceptable efficacy against the clinical isolate of S. aureus, but did not reduce the clinical isolate by the same amount. Conclusions Although the contact lens disinfecting solutions meet or exceed the ISO 14729 Stand-alone primary acceptance criteria for the standard strains of P. aeruginosa and S. aureus, their efficacy may be insufficient against clinical isolates of these organisms. PMID:22094301

  11. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic station. As a direct consequence, we are able to save computing time for the calculation of the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated first using synthetic data. In the next step, this method was applied to data of 43 local earthquakes of low and medium magnitude (1.7 < magnitude scale < 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network are used to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria, and why it is an area of relatively high earthquake hazard and risk. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
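
    A minimal sketch of the back-projection idea described above is given below (not the authors' code): peak ground velocities are converted to pseudo-magnitudes at every grid point through an assumed amplitude-distance relation (the coefficients and station data are illustrative), and the minimum over stations is kept at each grid point.

```python
import numpy as np

# Illustrative sketch (not the authors' code): back-project peak ground
# velocities onto a grid and keep the minimum pseudo-magnitude per grid point.
# The amplitude-distance relation and its coefficients k, c are assumptions.

def pseudo_magnitude(amp, dist_km, k=1.66, c=-3.0):
    """Invert an assumed relation log10(amp) = M - k*log10(dist) + c for M."""
    return np.log10(amp) + k * np.log10(dist_km) - c

def min_pseudo_magnitude_grid(station_xy, peak_amps, grid_x, grid_y):
    gx, gy = np.meshgrid(grid_x, grid_y)                 # grid coordinates (km)
    per_station = []
    for (sx, sy), amp in zip(station_xy, peak_amps):
        dist = np.hypot(gx - sx, gy - sy) + 1e-3         # avoid log10(0)
        per_station.append(pseudo_magnitude(amp, dist))
    # The minimum over stations is robust against single disturbed stations
    # and peaks near the true epicentre when a detectable event is present.
    return np.min(np.stack(per_station), axis=0)

# Usage with made-up station coordinates (km) and peak velocities (m/s).
stations = [(0.0, 0.0), (30.0, 5.0), (10.0, 40.0)]
peaks = [2e-5, 8e-6, 1.2e-5]
grid = min_pseudo_magnitude_grid(stations, peaks,
                                 np.linspace(-10, 50, 61), np.linspace(-10, 50, 61))
print(grid.shape, grid.max())
```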

  12. A coupled electro-thermal Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Homsi, L.; Geuzaine, C.; Noels, L.

    2017-11-01

    This paper presents a Discontinuous Galerkin scheme for solving the nonlinear elliptic partial differential equations of coupled electro-thermal problems. We discuss the fundamental equations for the transport of electricity and heat, in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of energetically conjugated pairs of fluxes and field gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak form solution are proved. The numerical properties of the nonlinear elliptic problems, i.e. consistency and stability, are demonstrated under specific conditions, namely the use of a sufficiently large stabilization parameter and of at least quadratic polynomial approximations. Moreover, the a priori error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size for the given polynomial approximation degree.

  13. Optimal sparse approximation with integrate and fire neurons.

    PubMed

    Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher

    2014-08-01

    Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate and fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially to ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges to the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
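
    The non-spiking LCA that the spiking network is designed to match can be sketched in a few lines; the dictionary, signal and parameters below are made-up illustrations, not the V1 model or NEURON simulation of the paper.

```python
import numpy as np

# Minimal sketch of the (non-spiking) locally competitive algorithm (LCA).
# At convergence the outputs approximate a minimizer of
#   0.5*||s - Phi a||^2 + lam*||a||_1.

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, s, lam=0.1, tau=10.0, dt=0.1, n_steps=2000):
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    b = Phi.T @ s                            # feed-forward drive
    u = np.zeros(Phi.shape[1])               # membrane-potential-like states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)           # firing-rate-like outputs
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.7, 0.5]
s = Phi @ a_true
a_hat = lca(Phi, s)
print("non-zeros in solution:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```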

  14. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes. PMID:23864831

  15. Global dynamics of a delay differential equation with spatial non-locality in an unbounded domain

    NASA Astrophysics Data System (ADS)

    Yi, Taishan; Zou, Xingfu

    In this paper, we study the global dynamics of a class of differential equations with temporal delay and spatial non-locality in an unbounded domain. Adopting the compact open topology, we describe the delicate asymptotic properties of the nonlocal delayed effect and establish some a priori estimates for nontrivial solutions, which enable us to show the permanence of the equation. Combining these results with a dynamical systems approach, we determine the global dynamics of the equation under appropriate conditions. Applying the main results to the model with Ricker's birth function and Mackey-Glass's hematopoiesis function, we obtain threshold results for the global dynamics of these two models. We explain why our results on the global attractivity of the positive equilibrium in C∖{0} under the compact open topology become invalid in C∖{0} with respect to the usual supremum norm, and we identify a subset of C∖{0} in which the positive equilibrium remains attractive with respect to the supremum norm.

  16. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes.

  17. Solution of Volterra and Fredholm Classes of Equations via Triangular Orthogonal Function (A Combination of Right Hand Triangular Function and Left Hand Triangular Function) and Hybrid Orthogonal Function (A Combination of Sample Hold Function and Right Hand Triangular Function)

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep

    2018-04-01

    In this paper the authors deal with seven kinds of non-linear Volterra and Fredholm classes of equations. They formulate an algorithm for solving the aforementioned equation types via the Hybrid Function (HF) and Triangular Function (TF) piecewise-linear orthogonal approach. In this approach the integral equation or integro-differential equation is reduced to an equivalent system of simultaneous non-linear equations, and either Newton's method or Broyden's method is employed to solve them. The authors calculate the L2-norm error and the max-norm error for both the HF and TF methods for each kind of equation. Through the illustrated examples, they show that the HF-based algorithm produces stable results, whereas the TF-based computational method yields either stable, anomalous or unstable results.

  18. Conflict and convention in dynamic networks.

    PubMed

    Foley, Michael; Forber, Patrick; Smead, Rory; Riedl, Christoph

    2018-03-01

    An important way to resolve games of conflict (snowdrift, hawk-dove, chicken) involves adopting a convention: a correlated equilibrium that avoids any conflict between aggressive strategies. Dynamic networks allow individuals to resolve conflict via their network connections rather than changing their strategy. Exploring how behavioural strategies coevolve with social networks reveals new dynamics that can help explain the origins and robustness of conventions. Here, we model the emergence of conventions as correlated equilibria in dynamic networks. Our results show that networks have the tendency to break the symmetry between the two conventional solutions in a strongly biased way. Rather than the correlated equilibrium associated with ownership norms (play aggressive at home, not away), we usually see the opposite host-guest norm (play aggressive away, not at home) evolve on dynamic networks, a phenomenon common to human interaction. We also show that learning to avoid conflict can produce realistic network structures in a way different than preferential attachment models. © 2017 The Author(s).

  19. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    The contact concentration measurement data assimilation problem is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure for the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of solving the adjoint problem. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter, which regulates the ratio between model and data in the resulting analysis, is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate used is the upper one acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage that is responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on the analytical extraction of the exponential terms from the solution, which guarantees an unconditionally positive sign for the evaluated concentrations. The splitting-based structure of the algorithm provides the means for efficient parallel realization. The work is partially supported by Program No 4 of the Presidium of RAS and Program No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and by Integration projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
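
    The Tikhonov/Morozov ingredients of this approach can be illustrated on a toy linear inverse problem (a sketch with a made-up operator and noise level, not the splitting-based assimilation scheme of the paper): the regularization parameter is chosen so that the data misfit matches an assumed noise level.

```python
import numpy as np

# Toy sketch: Tikhonov-regularized source inversion with the regularization
# parameter chosen by Morozov's discrepancy principle. The operator K, data d
# and noise level delta are made up for illustration.

def tikhonov_solve(K, d, alpha):
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)

def morozov_alpha(K, d, delta, lo=1e-8, hi=1e2, iters=60):
    """Bisect on log(alpha) so that ||K u_alpha - d|| matches the noise level."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        residual = np.linalg.norm(K @ tikhonov_solve(K, d, mid) - d)
        if residual > delta:
            hi = mid            # over-regularized: decrease alpha
        else:
            lo = mid            # under-regularized: increase alpha
    return np.sqrt(lo * hi)

rng = np.random.default_rng(1)
K = rng.standard_normal((80, 40))
u_true = np.zeros(40)
u_true[10:15] = 1.0
noise = rng.standard_normal(80)
delta = 0.5
d = K @ u_true + delta * noise / np.linalg.norm(noise)   # noise of norm delta

alpha = morozov_alpha(K, d, delta)
u_hat = tikhonov_solve(K, d, alpha)
print(f"alpha = {alpha:.3g}, residual = {np.linalg.norm(K @ u_hat - d):.3f}")
```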

  20. Verification of Software: The Textbook and Real Problems

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2006-01-01

    The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution; it ensures merely that the observed order of accuracy of the solutions is the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways of determining the order of accuracy. MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density needed to achieve the theoretical order of accuracy, even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L2-norm convergence differed from variable to variable; therefore, an observed order of accuracy could not be determined conclusively in all cases, the cause of which is currently under investigation.
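
    An order-of-accuracy check in the spirit of MMS can be sketched on a one-dimensional model problem (an illustration, not one of the codes verified in the report): manufacture an exact solution, derive the forcing, solve on two meshes and estimate the observed order from the L2-norm errors.

```python
import numpy as np

# Manufacture u(x) = sin(x) as the exact solution of -u'' = f on (0, pi)
# with u(0) = u(pi) = 0, solve with a 2nd-order central difference scheme
# on two meshes, and compute the observed order of accuracy.

def solve_poisson(n):
    x = np.linspace(0.0, np.pi, n + 1)
    h = x[1] - x[0]
    f = np.sin(x[1:-1])                      # -u'' = sin(x) when u = sin(x)
    A = (np.diag(2.0 * np.ones(n - 1)) -
         np.diag(np.ones(n - 2), 1) - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    err = np.sqrt(h * np.sum((u - np.sin(x[1:-1]))**2))   # discrete L2 norm
    return h, err

(h1, e1), (h2, e2) = solve_poisson(32), solve_poisson(64)
observed_order = np.log(e1 / e2) / np.log(h1 / h2)
print(f"observed order of accuracy ~ {observed_order:.2f}")   # ~ 2 here
```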

  1. A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2008-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.

  2. Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in Hs(Rn) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in Hs(Rn) and attracts all tempered random subsets of L2(Rn) with respect to the norm of Hs(Rn). The main difficulty is to show the pullback asymptotic compactness of solutions in Hs(Rn) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  3. Stability analysis of spectral methods for hyperbolic initial-boundary value systems

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Lustman, L.; Tadmor, E.

    1986-01-01

    A constant coefficient hyperbolic system in one space variable, with zero initial data is discussed. Dissipative boundary conditions are imposed at the two points x = + or - 1. This problem is discretized by a spectral approximation in space. Sufficient conditions under which the spectral numerical solution is stable are demonstrated - moreover, these conditions have to be checked only for scalar equations. The stability theorems take the form of explicit bounds for the norm of the solution in terms of the boundary data. The dependence of these bounds on N, the number of points in the domain (or equivalently the degree of the polynomials involved), is investigated for a class of standard spectral methods, including Chebyshev and Legendre collocations.

  4. Inverse eigenproblem for R-symmetric matrices and their approximation

    NASA Astrophysics Data System (ADS)

    Yuan, Yongxin

    2009-11-01

    Let R be a nontrivial involution, i.e., R = R^(-1) ≠ ±I_n. A matrix G is said to be R-symmetric if RGR = G. The set of all R-symmetric matrices is denoted by S. In this paper, we first give the solvability condition for the following inverse eigenproblem (IEP): given a set of vectors {x_i} and a set of complex numbers {λ_i}, find a matrix A in S such that the λ_i and x_i are, respectively, the eigenvalues and eigenvectors of A. We then consider the following approximation problem: given an n×n matrix Ã, find A* in S_E such that ||Ã − A*|| = min over A in S_E of ||Ã − A||, where S_E is the solution set of the IEP and ||·|| is the Frobenius norm. We provide an explicit formula for the best approximation solution by means of the canonical correlation decomposition.

  5. Quantum dark soliton: Nonperturbative diffusion of phase and position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dziarmaga, J.

    2004-12-01

    The dark soliton solution of the Gross-Pitaevskii equation in one dimension has two parameters that do not change the energy of the solution: the global phase of the condensate wave function and the position of the soliton. These degeneracies appear in the Bogoliubov theory as Bogoliubov modes with zero frequencies and zero norms. These 'zero modes' cannot be quantized as the usual Bogoliubov quasiparticle harmonic oscillators. They must be treated in a nonperturbative way. In this paper I develop a nonperturbative theory of zero modes. This theory provides a nonperturbative description of quantum phase diffusion and quantum diffusion of soliton position. An initially well localized wave packet for soliton position is predicted to disperse beyond the width of the soliton.

  6. A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2009-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.

  7. Quadratic RK shooting solution for an environmental parameter prediction boundary value problem

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis Th.; Tsitouras, Ch.

    2014-10-01

    Using tools of Information Geometry, the minimum distance between two elements of a statistical manifold is defined by the corresponding geodesic, i.e. the minimum length curve that connects them. Such a curve, where the probability distribution functions in the case of our meteorological data are two-parameter Weibull distributions, satisfies a 2nd order Boundary Value (BV) system. We study the numerical treatment of the resulting special quadratic form system using the shooting method. We compare the solutions of the problem when we employ a classical Singly Diagonally Implicit Runge-Kutta (SDIRK) 4(3) pair of methods and a quadratic SDIRK 5(3) pair. Both pairs have the same computational costs, whereas the second one attains higher order as it is specially constructed for quadratic problems.
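
    The shooting idea can be sketched on a generic second-order boundary value problem (not the geodesic system for the Weibull manifold, and using an off-the-shelf integrator rather than the SDIRK pairs of the paper): guess the unknown initial slope, integrate the initial value problem, and adjust the slope until the far boundary condition is met.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP for illustration: y'' = -y, y(0) = 0, y(1) = 1.
# The exact initial slope is 1/sin(1).

def rhs(t, z):
    y, yp = z
    return [yp, -y]

def boundary_mismatch(slope):
    """Integrate the IVP with a trial slope and return the error at the far end."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] - 1.0            # want y(1) = 1

slope = brentq(boundary_mismatch, 0.1, 5.0)   # root of the mismatch function
print(slope, 1.0 / np.sin(1.0))               # shooting result vs. exact value
```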

  8. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two-input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.

  9. Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems

    NASA Astrophysics Data System (ADS)

    Ataei, Mohammad; Enshaee, Ali

    2011-12-01

    In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations and mainly involves the utilisation of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable ones and also to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) the numerical one, which is unique, and (2) the parametric one, in which the parameters are determined in order to achieve a gain matrix with minimum Frobenius norm.
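
    As a baseline illustration of eigenvalue assignment by state feedback (the system matrices below are made up, and SciPy's place_poles optimizes robustness of the assignment rather than the Frobenius norm of the gain, so this is not the parametric minimum-norm method of the article):

```python
import numpy as np
from scipy.signal import place_poles

# Toy two-input, third-order system; u = -K x places the closed-loop poles.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
desired = np.array([-1.0, -2.0, -3.0])

result = place_poles(A, B, desired)
K = result.gain_matrix
print("||K||_F =", np.linalg.norm(K, "fro"))
print("closed-loop eigenvalues:", np.sort(np.linalg.eigvals(A - B @ K)))
```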

  10. The venetian-blind effect: a preference for zero disparity or zero slant?

    PubMed Central

    Vlaskamp, Björn N. S.; Guan, Phillip; Banks, Martin S.

    2013-01-01

    When periodic stimuli such as vertical sinewave gratings are presented to the two eyes, the initial stage of disparity estimation yields multiple solutions at multiple depths. The solutions are all frontoparallel when the sinewaves have the same spatial frequency; they are all slanted when the sinewaves have quite different frequencies. Despite multiple solutions, humans perceive only one depth in each visual direction: a single frontoparallel plane when the frequencies are the same and a series of small slanted planes—Venetian blinds—when the frequencies are quite different. These percepts are consistent with a preference for solutions that minimize absolute disparity or overall slant. The preference for minimum disparity and minimum slant are identical for gaze at zero eccentricity; we dissociated the predictions of the two by measuring the occurrence of Venetian blinds when the stimuli were viewed in eccentric gaze. The results were generally quite consistent with a zero-disparity preference (Experiment 1), but we also observed a shift toward a zero-slant preference when the edges of the stimulus had zero slant (Experiment 2). These observations provide useful insights into how the visual system constructs depth percepts from a multitude of possible depths. PMID:24273523

  11. The venetian-blind effect: a preference for zero disparity or zero slant?

    PubMed

    Vlaskamp, Björn N S; Guan, Phillip; Banks, Martin S

    2013-01-01

    When periodic stimuli such as vertical sinewave gratings are presented to the two eyes, the initial stage of disparity estimation yields multiple solutions at multiple depths. The solutions are all frontoparallel when the sinewaves have the same spatial frequency; they are all slanted when the sinewaves have quite different frequencies. Despite multiple solutions, humans perceive only one depth in each visual direction: a single frontoparallel plane when the frequencies are the same and a series of small slanted planes-Venetian blinds-when the frequencies are quite different. These percepts are consistent with a preference for solutions that minimize absolute disparity or overall slant. The preference for minimum disparity and minimum slant are identical for gaze at zero eccentricity; we dissociated the predictions of the two by measuring the occurrence of Venetian blinds when the stimuli were viewed in eccentric gaze. The results were generally quite consistent with a zero-disparity preference (Experiment 1), but we also observed a shift toward a zero-slant preference when the edges of the stimulus had zero slant (Experiment 2). These observations provide useful insights into how the visual system constructs depth percepts from a multitude of possible depths.

  12. Using Peer Injunctive Norms to Predict Early Adolescent Cigarette Smoking Intentions

    PubMed Central

    Zaleski, Adam C.; Aloise-Young, Patricia A.

    2013-01-01

    The present study investigated the importance of the perceived injunctive norm in predicting early adolescent cigarette smoking intentions. A total of 271 6th graders completed a survey that included perceived prevalence of friend smoking (descriptive norm), perceptions of friends' disapproval of smoking (injunctive norm), and future smoking intentions. Participants also listed their five best friends, from which the actual injunctive norm was calculated. Results showed that smoking intentions were significantly correlated with the perceived injunctive norm but not with the actual injunctive norm. Secondly, the perceived injunctive norm predicted an additional 3.4% of variance in smoking intentions above and beyond the perceived descriptive norm. These results demonstrate the importance of the perceived injunctive norm in predicting early adolescent smoking intentions. PMID:24078745

  13. Effect of salt solutions on radiosensitivity of mammalian cells. I. Specific ion effects.

    PubMed

    Raaphorst, G P; Kruuv, J

    1977-07-01

    The radiation isodose survival curve of cells subjected to a wide concentration range of sucrose solutions has two maxima separated by a minimum. Both cations and anions can alter the cellular radiosensitivity above and beyond the osmotic effect observed for cells treated with sucrose solutions. The basic shape of the isodose curve can also be modulated by changes in temperature and solution exposure times. Some of these alterations in radiosensitivity may be related to changes in the amount and structure of cellular water or macromolecular conformation or to the direct effect of the ions, especially at high solute concentrations.

  14. Solutions of the Helmholtz equation with boundary conditions for force-free magnetic fields

    NASA Technical Reports Server (NTRS)

    Rasband, S. N.; Turner, L.

    1981-01-01

    It is shown that the solution, with one ignorable coordinate, for the Taylor minimum energy state (resulting in a force-free magnetic field) in either a straight cylindrical or a toroidal geometry with arbitrary cross section can be reduced to the solution of either an inhomogeneous Helmholtz equation or a Grad-Shafranov equation with simple boundary conditions. Standard Green's function theory is, therefore, applicable. Detailed solutions are presented for the Taylor state in toroidal and cylindrical domains having a rectangular cross section. The focus is on solutions corresponding to the continuous eigenvalue spectra. Singular behavior at 90 deg corners is explored in detail.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cirilo Antonio, N.; Manojlovic, N.; Departamento de Matematica, FCT, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro

    The sl(2) Gaudin model with Jordanian twist is studied. This system can be obtained as the semiclassical limit of the XXX spin chain deformed by the Jordanian twist. The appropriate creation operators that yield the Bethe states of the Gaudin model and consequently its spectrum are defined. Their commutation relations with the generators of the corresponding loop algebra as well as with the generating function of integrals of motion are given. The inner products and norms of Bethe states and the relation to the solutions of the Knizhnik-Zamolodchikov equations are discussed.

  16. Sparse Substring Pattern Set Discovery Using Linear Programming Boosting

    NASA Astrophysics Data System (ADS)

    Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki

    In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.

  17. An AMR capable finite element diffusion solver for ALE hydrocodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, A. C.; Bailey, D. S.; Kaiser, T. B.

    2015-02-01

    Here, we present a novel method for the solution of the diffusion equation on a composite AMR mesh. This approach is suitable for including diffusion based physics modules to hydrocodes that support ALE and AMR capabilities. To illustrate, we proffer our implementations of diffusion based radiation transport and heat conduction in a hydrocode called ALE-AMR. Numerical experiments conducted with the diffusion solver and associated physics packages yield 2nd order convergence in the L2 norm.

  18. Concerning modeling of double-stage water evaporation cooling

    NASA Astrophysics Data System (ADS)

    Shatskiy, V. P.; Fedulova, L. I.; Gridneva, I. V.

    2018-03-01

    The need to set technical norms for production, as well as acceptable microclimate parameters such as temperature and humidity at the workplace, remains urgent. The use of such units should be economically sound, taking into account construction, assembly, operational, technological, and environmental requirements. Water evaporation coolers are simple to maintain, environmentally friendly, and quite cheap, but developing the most efficient solutions requires mathematical modeling of the heat and mass transfer processes that take place in them.

  19. Initial-boundary layer associated with the nonlinear Darcy-Brinkman-Oberbeck-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Fei, Mingwen; Han, Daozhi; Wang, Xiaoming

    2017-01-01

    In this paper, we study the vanishing Darcy number limit of the nonlinear Darcy-Brinkman-Oberbeck-Boussinesq system (DBOB). This singular perturbation problem involves singular structures both in time and in space giving rise to initial layers, boundary layers and initial-boundary layers. We construct an approximate solution to the DBOB system by the method of multiple scale expansions. The convergence with optimal convergence rates in certain Sobolev norms is established rigorously via the energy method.

  20. On the Singular Incompressible Limit of Inviscid Compressible Fluids

    NASA Astrophysics Data System (ADS)

    Secchi, P.

    We consider the Euler equations of barotropic inviscid compressible fluids in a bounded domain. It is well known that, as the Mach number goes to zero, the compressible flows approximate the solution of the equations of motion of inviscid, incompressible fluids. In this paper we discuss, for the boundary case, the different kinds of convergence under various assumptions on the data, in particular the weak convergence in the case of uniformly bounded initial data and the strong convergence in the norm of the data space.

  1. [Integral quantitative evaluation of working conditions in the construction industry].

    PubMed

    Guseĭnov, A A

    1993-01-01

    The present method of evaluating environmental quality (using MAC and MAL values) does not allow the working conditions in the construction industry to be assessed completely and objectively, owing to multiple confounding elements. A solution to this complicated problem, including the analysis of the various correlated elements of the system "human--work conditions--environment", may be supported by a social norm of morbidity that is independent of the industrial and natural environment. The complete integral assessment makes it possible to see the whole situation and reveal the points at risk.

  2. A Lower Bound for the Norm of the Solution of a Nonlinear Volterra Equation in One-Dimensional Viscoelasticity.

    DTIC Science & Technology

    1980-12-09


  3. Social norms and their influence on eating behaviours.

    PubMed

    Higgs, Suzanne

    2015-03-01

    Social norms are implicit codes of conduct that provide a guide to appropriate action. There is ample evidence that social norms about eating have a powerful effect on both food choice and amounts consumed. This review explores the reasons why people follow social eating norms and the factors that moderate norm following. It is proposed that eating norms are followed because they provide information about safe foods and facilitate food sharing. Norms are a powerful influence on behaviour because following (or not following) norms is associated with social judgements. Norm following is more likely when there is uncertainty about what constitutes correct behaviour and when there is greater shared identity with the norm referent group. Social norms may affect food choice and intake by altering self-perceptions and/or by altering the sensory/hedonic evaluation of foods. The same neural systems that mediate the rewarding effects of food itself are likely to reinforce the following of eating norms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Analysis of Voltammetric Half-Wave Potentials in Low Ionic Strength Solutions and Voltammetric Measurement of Ion Impurity Concentrations

    DTIC Science & Technology

    1990-11-17

    voltammetric response. As will be developed in this paper, the ability to observe sigmoidally shaped voltammograms requires a minimum number of solution ions... polished with 1 micron diamond paste (Buehler). Similar results were obtained using both methods of electrode construction. Precise values of the electrode... impurities in the bulk of the solution that can serve as an electrolyte, C_imp. We will assume for simplicity that all ionic impurities are 1:1

  5. Reference genes for reverse transcription quantitative PCR in canine brain tissue.

    PubMed

    Stassen, Quirine E M; Riemers, Frank M; Reijmerink, Hannah; Leegwater, Peter A J; Penning, Louis C

    2015-12-09

    In the last decade canine models have been used extensively to study genetic causes of neurological disorders such as epilepsy and Alzheimer's disease and to unravel their pathophysiological pathways. Reverse transcription quantitative polymerase chain reaction is a sensitive and inexpensive method to study expression levels of genes involved in disease processes. Accurate normalisation with stably expressed so-called reference genes is crucial for reliable expression analysis. Following the precise guidelines of the minimum information for publication of quantitative real-time PCR experiments, the expression of ten frequently used reference genes, namely YWHAZ, HMBS, B2M, SDHA, GAPDH, HPRT, RPL13A, RPS5, RPS19 and GUSB, was evaluated in seven brain regions (frontal lobe, parietal lobe, occipital lobe, temporal lobe, thalamus, hippocampus and cerebellum) and in the whole brain of healthy dogs. The stability of expression varied between different brain areas. Using the GeNorm and Normfinder software, HMBS, GAPDH and HPRT were the most reliable reference genes for whole brain. Furthermore, based on GeNorm calculations it was concluded that as few as two to three reference genes are sufficient to obtain reliable normalisation, irrespective of the brain area. Our results amend/extend the limited previously published data on canine brain reference genes. Despite the excellent expression stability of HMBS, GAPDH and HPRT, the evaluation of expression stability of reference genes must be a standard and integral part of experimental design and subsequent data analysis.
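
    A geNorm-style stability measure can be sketched as follows (a simplified illustration with made-up expression data, not the published GeNorm or Normfinder software): for each candidate gene, M is the average standard deviation of its log2 expression ratios with every other candidate across samples, and lower M indicates more stable expression.

```python
import numpy as np

# Simplified geNorm-like stability measure M for candidate reference genes.

def genorm_m(expr):
    """expr: (n_samples, n_genes) array of relative expression quantities."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        others = [k for k in range(n_genes) if k != j]
        ratios = log_expr[:, [j]] - log_expr[:, others]   # log2 ratios vs. others
        m[j] = ratios.std(axis=0, ddof=1).mean()          # average pairwise SD
    return m

# Made-up data: 12 samples, 4 candidate genes with different variability.
rng = np.random.default_rng(7)
expr = 2.0 ** rng.normal(loc=5.0, scale=[0.2, 0.3, 0.8, 0.5], size=(12, 4))
print(genorm_m(expr))          # smallest values -> most stable candidates
```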

  6. Robot map building based on fuzzy-extending DSmT

    NASA Astrophysics Data System (ADS)

    Li, Xinde; Huang, Xinhan; Wu, Zuyu; Peng, Gang; Wang, Min; Xiong, Youlun

    2007-11-01

    With the extensive application of mobile robots in many different fields, map building in unknown environments has been one of the principal issues in the field of intelligent mobile robots. However, information acquired in map building presents characteristics of uncertainty, imprecision and even high conflict, especially in the course of building a grid map using sonar sensors. In this paper, we extend DSmT with fuzzy theory by considering different fuzzy T-norm operators (such as the Algebraic Product operator, Bounded Product operator, Einstein Product operator and Default minimum operator), in order to develop a more general and flexible combination rule for more extensive application. At the same time, we apply fuzzy-extended DSmT to mobile robot map building with the help of a new self-localization method based on neighboring field appearance matching ( -NFAM), to make the new tool more robust in very complex environments. An experiment is conducted to reconstruct the map with the new tool in an indoor environment, in order to compare the performance of the four T-norm operators in map building when the Pioneer II mobile robot runs along the same trace. Finally, it is concluded that this study develops a new idea to extend DSmT, provides a new approach for autonomous navigation of mobile robots, and provides a human-computer interactive interface to manage and manipulate the robot remotely.

  7. Muslim women's narratives about bodily change and care during critical illness: a qualitative study.

    PubMed

    Zeilani, Ruqayya; Seymour, Jane E

    2012-03-01

    To explore experiences of Jordanian Muslim women in relation to bodily change during critical illness. A longitudinal narrative approach was used. A purposive sample of 16 Jordanian women who had spent a minimum of 48 hr in intensive care participated in one to three interviews over a 6-month period. Three main categories emerged from the analysis: the dependent body reflects changes in the women's bodily strength and performance, as they moved from being care providers into those in need of care; this was associated with experiences of a sense of paralysis, shame, and burden. The social body reflects the essential contribution that family help or nurses' support (as a proxy for family) made to women's adjustment to bodily change and their ability to make sense of their illness. The cultural body reflects the effect of cultural norms and Islamic beliefs on the women's interpretation of their experiences and relates to the women's understandings of bodily modesty. This study illustrates, by in-depth focus on Muslim women's narratives, the complex interrelationship between religious beliefs, cultural norms, and the experiences and meanings of bodily changes during critical illness. This article provides insights into vital aspects of Muslim women's needs and preferences for nursing care. It highlights the importance of including an assessment of culture and spiritual aspects when nursing critically ill patients. © 2011 Sigma Theta Tau International.

  8. Inversion of Magnetic Measurements of the CHAMP Satellite Over the Pannonian Basin

    NASA Technical Reports Server (NTRS)

    Kis, K. I.; Taylor, P. T.; Wittmann, G.; Toronyi, B.; Puszta, S.

    2011-01-01

    The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed in a spherical shell, some 107,927 data points recorded between January 1 and December 31 of 2008. They covered the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a spherical grid of 0.5 x 0.5 degrees at an elevation of 324 km using a Gaussian weight function. The vertical gradient of these total magnetic anomalies was also computed and mapped to the surface of a sphere at 324 km elevation. The former spherical anomaly data at 425 km altitude were downward continued to 324 km. To interpret these data at the elevation of 324 km we used an inversion method. A polygonal prism forward model was used for the inversion. The minimization problem was solved numerically by the Simplex and Simulated Annealing methods; an L2 norm was used in the case of Gaussian distribution parameters and an L1 norm in the case of Laplace distribution parameters. We interpret that the magnetic anomaly was produced by several sources and by the effect of the stable magnetization of the exsolution of hemo-ilmenite minerals in the upper crustal metamorphic rocks.
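
    The L2- versus L1-norm misfit choice mentioned above can be illustrated with a toy forward model fitted by the Nelder-Mead simplex method (the anomaly profile and data are made up, not the polygonal prism model or the CHAMP data):

```python
import numpy as np
from scipy.optimize import minimize

# Toy anomaly profile with two parameters; one outlier-contaminated sample
# shows why an L1 misfit (Laplace errors) is more robust than L2 (Gaussian).

def forward(params, x):
    amp, width = params
    return amp / (1.0 + (x / width) ** 2)

def misfit(params, x, data, norm="L2"):
    r = forward(params, x) - data
    return np.sum(r ** 2) if norm == "L2" else np.sum(np.abs(r))

x = np.linspace(-5.0, 5.0, 101)
rng = np.random.default_rng(3)
data = forward((10.0, 1.5), x) + 0.3 * rng.standard_normal(x.size)
data[50] += 15.0                                   # single gross outlier

for norm in ("L2", "L1"):
    fit = minimize(misfit, x0=(5.0, 1.0), args=(x, data, norm),
                   method="Nelder-Mead")
    print(norm, fit.x)      # the L1 fit is far less sensitive to the outlier
```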

  9. Effects of vocal training on singing and speaking voice characteristics in vocally healthy adults and children based on choral and nonchoral data.

    PubMed

    Siupsinskiene, Nora; Lycke, Hugo

    2011-07-01

    This prospective cross-sectional study examines the effects of voice training on vocal capabilities in vocally healthy age and gender differentiated groups measured by voice range profile (VRP) and speech range profile (SRP). Frequency and intensity measurements of the VRP and SRP using standard singing and speaking voice protocols were derived from 161 trained choir singers (21 males, 59 females, and 81 prepubescent children) and from 188 nonsingers (38 males, 89 females, and 61 children). When compared with nonsingers, both genders of trained adult and child singers exhibited increased mean pitch range, highest frequency, and VRP area in high frequencies (P<0.05). Female singers and child singers also showed significantly increased mean maximum voice intensity, intensity range, and total VRP area. The logistic regression analysis showed that VRP pitch range, highest frequency, maximum voice intensity, and maximum-minimum intensity range, and SRP slope of speaking curve were the key predictors of voice training. Age, gender, and voice training differentiated norms of VRP and SRP parameters are presented. Significant positive effect of voice training on vocal capabilities, mostly singing voice, was confirmed. The presented norms for trained singers, with key parameters differentiated by gender and age, are suggested for clinical practice of otolaryngologists and speech-language pathologists. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  10. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    PubMed

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
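
    A sparse linear decoder of the kind described above can be sketched with an L1-penalized logistic regression on simulated firing rates (the data and the use of scikit-learn are illustrative assumptions, not the V1 model of the talk):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated decoding problem: only a handful of neurons carry the decision
# signal, and the L1 penalty selects a small subset of non-zero weights.

rng = np.random.default_rng(5)
n_trials, n_neurons = 400, 500
rates = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)
informative = rng.choice(n_neurons, size=10, replace=False)   # few useful cells
labels = rng.integers(0, 2, size=n_trials)
rates[:, informative] += 3.0 * labels[:, None]                # signal in 10 cells

decoder = LogisticRegression(penalty="l1", C=0.05, solver="liblinear")
decoder.fit(rates, labels)
n_used = np.count_nonzero(decoder.coef_)
print(f"decoder uses {n_used} of {n_neurons} neurons, "
      f"training accuracy = {decoder.score(rates, labels):.2f}")
```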

  11. Minimum deltaV Burn Planning for the International Space Station Using a Hybrid Optimization Technique, Level 1

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2015-01-01

    The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and magnitude (i.e. deltaV) that meet these constraints. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total DeltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems, one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two aforementioned levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, or the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, or the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic algorithm in Level 2 to select the number of burns and their TIGs for the next generation. In this manner, the two levels solve their respective sub-problems separately but collaboratively until a burn solution is found that globally minimizes the deltaV across the entire trajectory. Feasible solutions can also be found by simply using the SQP algorithm in Level 1 with a zero cost function. This paper discusses the formulation of the Level 1 sub-problem and the development of a prototype software tool to solve it. The Level 2 sub-problem will be discussed in a future work. Following the Level 1 formulation and solution, several look-ahead trajectory examples for the ISS are explored. In each case, the burn targeting results using the current process are compared against a feasible solution found using Level 1 in the proposed technique. Level 1 is then used to find a minimum deltaV solution given the fixed number of burns and burn TIGs. The optimal solution is compared with the previously found feasible solution to determine the deltaV (and therefore propellant) savings. The proposed technique seeks to both improve the current process for targeting ISS burns, and to add the capability to optimize ISS burns in a novel fashion. The optimal solutions found using this technique can potentially save hundreds of kilograms of propellant over the course of the ISS mission compared to feasible solutions alone. 
While the software tool being developed to implement this technique is specific to ISS, the concept is extensible to other long-duration, central-body orbiting missions that must perform orbit maintenance burns to meet operational trajectory constraints.
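
    The two-level idea can be sketched with a toy orbit-decay model (linear altitude loss and impulsive reboosts stand in for the real ISS dynamics and TOPO constraints, and SciPy's differential evolution stands in for the genetic algorithm of Level 2): the inner level finds the minimum total delta-v for fixed burn times with SLSQP, and the outer level searches over the burn times.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

DECAY = 0.08          # km of altitude lost per day (assumed)
GAIN = 1.7            # km of altitude gained per m/s of delta-v (assumed)
H0, H_MIN = 415.0, 410.0
CHECK_T = np.linspace(0.0, 180.0, 61)     # altitude checks over the horizon (days)

def altitude(t, burn_times, dvs):
    boosts = sum(GAIN * dv * (t >= tb) for tb, dv in zip(burn_times, dvs))
    return H0 - DECAY * t + boosts

def inner_min_dv(burn_times):
    """Level 1: minimum total delta-v for fixed burn times, altitude >= H_MIN."""
    cons = {"type": "ineq",
            "fun": lambda dvs: altitude(CHECK_T, burn_times, dvs) - H_MIN}
    res = minimize(lambda dvs: np.sum(dvs), x0=np.full(len(burn_times), 1.0),
                   bounds=[(0.0, 10.0)] * len(burn_times),
                   constraints=[cons], method="SLSQP")
    return res.fun if res.success else 1e6    # penalize infeasible burn times

# Level 2: evolutionary search over two burn times (days).
outer = differential_evolution(inner_min_dv,
                               bounds=[(10.0, 80.0), (90.0, 170.0)],
                               maxiter=20, seed=0, polish=False)
print("burn times (days):", outer.x, " total delta-v (m/s):", outer.fun)
```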

  12. Minimum weight passive insulation requirements for hypersonic cruise vehicles.

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1972-01-01

    Analytical solutions are derived for two representative cases of the transient heat conduction equation to determine the minimum weight requirements for passive insulation systems of hypersonic cruise vehicles. The cases discussed are the wet wall case with the interior wall temperature held to that of the boiling point of the fuel throughout the flight, and the dry wall case where the heat transferred through the insulation is absorbed by the interior structure whose temperature is allowed to rise.

  13. Barriers and dispersal surfaces in minimum-time interception. [for optimizing aircraft flight paths]

    NASA Technical Reports Server (NTRS)

    Rajan, N.; Ardema, M. D.

    1984-01-01

    A method is proposed for mapping the barrier, dispersal, and control-level surfaces for a class of minimum-time interception and pursuit-evasion problems. Minimum-time interception of a target moving in a horizontal plane is formulated in a coordinate system whose origin is at the interceptor's terminal position and whose x-axis is along the terminal line of sight. This approach makes it possible to discuss the nature of the interceptor's extremals, using its extremal trajectory maps (ETMs), independently of target motion. The game surfaces are constructed by drawing sections of the isochrones, or constant minimum-time loci, from the interceptor and target ETMs. In this way, feedback solutions for the optimal controls are obtained. An example involving the interception of a target moving in a straight line at constant speed is presented.

  14. Transverse Stress Decay in a Specially Orthotropic Strip Under Localizing Normal Edge Loading

    NASA Technical Reports Server (NTRS)

    Fichter, W. B.

    2000-01-01

    Solutions are presented for the stresses in a specially orthotropic infinite strip which is subjected to localized uniform normal loading on one edge while the other edge is either restrained against normal displacement only, or completely fixed. The solutions are used to investigate the diffusion of load into the strip and in particular the decay of normal stress across the width of the strip. For orthotropic strips representative of a broad range of balanced and symmetric angle-ply composite laminates, minimum strip widths are found that ensure at least 90% decay of the normal stress across the strip. In addition, in a few cases where, on the fixed edge, the peak shear stress exceeds the normal stress in magnitude, minimum strip widths that ensure 90% decay of both stresses are found. To help in putting these results into perspective, and to illustrate the influence of material properties on load diffusion in orthotropic materials, closed-form solutions for the stresses in similarly loaded orthotropic half-planes are obtained. These solutions are used to generate illustrative stress contour plots for several representative laminates. Among the laminates, those composed of intermediate-angle plies, i.e., from about 30 degrees to 60 degrees, exhibit marked changes in normal stress contour shape with stress level. The stress contours are also used to find 90% decay distances in the half-planes. In all cases, the minimum strip widths for 90% decay of the normal stress exceed the 90% decay distances in the corresponding half-planes, in amounts ranging from only a few percent to about 50% of the half-plane decay distances. The 90% decay distances depend on both material properties and the boundary conditions on the supported edge.

  15. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed as a function of the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
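    As a rough illustration of the non-iterative idea, the sketch below designs an FIR filter whose output best matches an impulse train at an assumed fault period by solving a single least-squares problem. The windowing of the target, the multipoint weighting, and the period sweep used to build a deconvolution spectrum in MOMEDA are omitted; the synthetic signal and all parameters are made up for illustration.

```python
# Minimal sketch of the core non-iterative step behind MOMEDA-style deconvolution:
# design an FIR filter whose output best matches an impulse train at an assumed
# fault period by solving a single least-squares problem (no iterations).
import numpy as np

def impulse_train_filter(x, period, filter_len=30):
    """Return FIR coefficients f minimizing ||X.T @ f - t||, t = impulse train."""
    x = np.asarray(x, dtype=float)
    n_out = len(x) - filter_len + 1
    # Lagged signal matrix: row k holds the signal shifted by k samples.
    X = np.stack([x[k:k + n_out] for k in range(filter_len)])
    t = np.zeros(n_out)
    t[::period] = 1.0                      # target: one impulse per period
    f, *_ = np.linalg.lstsq(X.T, t, rcond=None)
    return f, X.T @ f                      # filter and deconvolved output

# Synthetic test: periodic impulses buried in noise, smeared by a short transfer path.
rng = np.random.default_rng(1)
clean = np.zeros(1000)
clean[::50] = 1.0
observed = np.convolve(clean, [1.0, 0.6, 0.3], mode="same") \
           + 0.1 * rng.standard_normal(1000)
f, recovered = impulse_train_filter(observed, period=50)
```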

  16. Exact solutions of the Wheeler–DeWitt equation and the Yamabe construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ita III, Eyo Eyo, E-mail: ita@usna.edu; Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw

    Exact solutions of the Wheeler–DeWitt equation of the full theory of four dimensional gravity of Lorentzian signature are obtained. They are characterized by Schrödinger wavefunctionals having support on 3-metrics of constant spatial scalar curvature, and thus contain two full physical field degrees of freedom in accordance with the Yamabe construction. These solutions are moreover Gaussians of minimum uncertainty and they are naturally associated with a rigged Hilbert space. In addition, in the limit the regulator is removed, exact 3-dimensional diffeomorphism and local gauge invariance of the solutions are recovered.

  17. Graph-Based Norm Explanation

    NASA Astrophysics Data System (ADS)

    Croitoru, Madalina; Oren, Nir; Miles, Simon; Luck, Michael

    Norms impose obligations, permissions and prohibitions on individual agents operating as part of an organisation. Typically, the purpose of such norms is to ensure that an organisation acts in some socially (or mutually) beneficial manner, possibly at the expense of individual agent utility. In this context, agents are norm-aware if they are able to reason about which norms are applicable to them, and to decide whether to comply with or ignore them. While much work has focused on the creation of norm-aware agents, much less has been concerned with aiding system designers in understanding the effects of norms on a system. The ability to understand such norm effects can aid the designer in avoiding incorrect norm specification, eliminating redundant norms and reducing normative conflict. In this paper, we address the problem of norm understanding by providing explanations as to why a norm is applicable, violated, or in some other state. We make use of conceptual graph based semantics to provide a graphical representation of the norms within a system. Given knowledge of the current and historical state of the system, such a representation allows for explanation of the state of norms, showing for example why they may have been activated or violated.

  18. Identification of Correlated GRACE Monthly Harmonic Coefficients Using Pattern Recognition and Neural Networks

    NASA Astrophysics Data System (ADS)

    Piretzidis, D.; Sra, G.; Sideris, M. G.

    2016-12-01

    This study explores new methods for identifying correlation errors in harmonic coefficients derived from monthly solutions of the Gravity Recovery and Climate Experiment (GRACE) satellite mission using pattern recognition and neural network algorithms. These correlation errors are evidenced in the differences between monthly solutions and can be suppressed using a de-correlation filter. In all studies so far, the implementation of the de-correlation filter starts from a specific minimum order (i.e., 11 for RL04 and 38 for RL05) and extends to the maximum order of the monthly solution examined. This implementation method has two disadvantages, namely, the omission of filtering correlated coefficients of order less than the minimum order and the filtering of uncorrelated coefficients of order higher than the minimum order. In the first case, the filtered solution is not completely free of correlated errors, whereas the second case results in a monthly solution that suffers from loss of geophysical signal. In the present study, a new method of implementing the de-correlation filter is suggested, by identifying and filtering only the coefficients that show indications of high correlation. Several numerical and geometric properties of the harmonic coefficient series of all orders are examined. Extreme cases of both correlated and uncorrelated coefficients are selected, and their corresponding properties are used to train a two-layer feed-forward neural network. The objective of the neural network is to identify and quantify the correlation by providing the probability of an order of coefficients being correlated. Results show good performance of the neural network, both in the validation stage of the training procedure and in the subsequent use of the trained network to classify independent coefficients. The neural network is also capable of identifying correlated coefficients even when a small number of training samples and neurons are used (e.g., 100 and 10, respectively).
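    The classification step can be pictured with the toy sketch below: a few hand-crafted descriptors of a coefficient series are fed to a small feed-forward network (scikit-learn's MLPClassifier with one hidden layer) that outputs the probability of the series being correlated. The features, training samples, and network size here are illustrative stand-ins, not those used in the study.

```python
# Illustrative sketch: generic numeric descriptors of a coefficient series are
# fed to a small feed-forward network that outputs P(correlated). The actual
# features and training data of the study are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPClassifier

def series_features(c):
    """Simple numeric/geometric descriptors of a coefficient series."""
    c = np.asarray(c, dtype=float)
    diffs = np.diff(c)
    sign_changes = np.sum(np.sign(diffs[:-1]) != np.sign(diffs[1:]))
    return [np.std(diffs), sign_changes / len(c), np.ptp(c)]

rng = np.random.default_rng(0)
# Extreme training cases: smooth (uncorrelated) vs. sawtooth-like (correlated) series.
smooth = [np.cumsum(rng.normal(0, 1e-3, 60)) for _ in range(100)]
sawtooth = [1e-2 * (-1.0) ** np.arange(60) + rng.normal(0, 1e-3, 60) for _ in range(100)]
X = np.array([series_features(s) for s in smooth + sawtooth])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
p_correlated = clf.predict_proba([series_features(sawtooth[0])])[:, 1]
```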

  19. Associations of Perceived Norms With Intentions to Learn Genomic Sequencing Results: Roles for Attitudes and Ambivalence

    PubMed Central

    Reid, Allecia E.; Taber, Jennifer M.; Ferrer, Rebecca A.; Biesecker, Barbara B.; Lewis, Katie L.; Biesecker, Leslie G.; Klein, William M. P.

    2018-01-01

    Objective Genomic sequencing is becoming increasingly accessible, highlighting the need to understand the social and psychological factors that drive interest in receiving testing results. These decisions may depend on perceived descriptive norms (how most others behave) and injunctive norms (what is approved of by others). We predicted that descriptive norms would be directly associated with intentions to learn genomic sequencing results, whereas injunctive norms would be associated indirectly, via attitudes. These differential associations with intentions versus attitudes were hypothesized to be strongest when individuals held ambivalent attitudes toward obtaining results. Methods Participants enrolled in a genomic sequencing trial (n=372) reported intentions to learn medically actionable, non-medically actionable, and carrier sequencing results. Descriptive norms items referenced other study participants. Injunctive norms were analyzed separately for close friends and family members. Attitudes, attitudinal ambivalence, and sociodemographic covariates were also assessed. Results In structural equation models, both descriptive norms and friend injunctive norms were associated with intentions to receive all sequencing results (ps<.004). Attitudes consistently mediated all friend injunctive norms-intentions associations, but not the descriptive norms-intentions associations. Attitudinal ambivalence moderated the association between friend injunctive norms (p≤.001), but not descriptive norms (p=.16), and attitudes. Injunctive norms were significantly associated with attitudes when ambivalence was high, but were unrelated when ambivalence was low. Results replicated for family injunctive norms. Conclusions Descriptive and injunctive norms play roles in genomic sequencing decisions. Considering mediators and moderators of these processes enhances ability to optimize use of normative information to support informed decision making. PMID:29745680

  20. A new design approach to achieve a minimum impulse limit cycle in the presence of significant measurement uncertainties

    NASA Technical Reports Server (NTRS)

    Martin, M. W.; Kubiak, E. T.

    1982-01-01

    A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.

  1. The Moderating Role of Close versus Distal Peer Injunctive Norms and Interdependent Self-Construal in the Effects of Descriptive Norms on College Drinking.

    PubMed

    Yang, Bo

    2018-06-01

    Based on the theory of normative social behavior (Rimal & Real, 2005), this study examined the effects of descriptive norms, close versus distal peer injunctive norms, and interdependent self-construal on college students' intentions to consume alcohol. Results of a cross-sectional study conducted among U.S. college students (N = 581) found that descriptive norms, close, and distal peer injunctive norms had independent effects on college students' intentions to consume alcohol. Furthermore, close peer injunctive norms moderated the effects of descriptive norms on college students' intentions to consume alcohol and the interaction showed different patterns among students with a strong and weak interdependent self-construal. High levels of close peer injunctive norms weakened the relationship between descriptive norms and intentions to consume alcohol among students with a strong interdependent self-construal but strengthened the relationship between descriptive norms and intentions to consume alcohol among students with a weak interdependent self-construal. Implications of the findings for norms-based research and college drinking interventions are discussed.

  2. Reconstructing the duty of water: a study of emergent norms in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Wescoat, J. L., Jr.

    2013-12-01

    This paper assesses the changing norms of water use known as the duty of water. It is a case study in historical socio-hydrology, or more precisely the history of socio-hydrologic ideas, a line of research that is useful for interpreting and anticipating changing social values with respect to water. The duty of water is currently defined as the amount of water reasonably required to irrigate a substantial crop with careful management and without waste on a given tract of land. The historical section of the paper traces this concept back to late 18th century analysis of steam engine efficiencies for mine dewatering in Britain. A half-century later, British irrigation engineers fundamentally altered the concept of duty to plan large-scale canal irrigation systems in northern India at an average duty of 218 acres per cubic foot per second (cfs). They justified this extensive irrigation standard (i.e., low water application rate over large areas) with a suite of social values that linked famine prevention with revenue generation and territorial control. The duty of water concept in this context articulated a form of political power, as did related irrigation engineering concepts such as "command" and "regime". Several decades later irrigation engineers in the western US adapted the duty of water concept to a different socio-hydrologic system and norms, using it to establish minimum standards for private water rights appropriation (e.g., only 40 to 80 acres per cfs). While both concepts of duty addressed socio-economic values associated with irrigation, the western US linked duty with justifications for, and limits of, water ownership. The final sections show that while the duty of water concept has been eclipsed in practice by other measures, standards, and values of water use efficiency, it has continuing relevance for examining ethical duties and for anticipating, if not predicting, emerging social values with respect to water.

  3. Alcohol evaluations and acceptability: Examining descriptive and injunctive norms among heavy drinkers

    PubMed Central

    Foster, Dawn W.; Neighbors, Clayton; Krieger, Heather

    2015-01-01

    Objectives This study assessed descriptive and injunctive norms, evaluations of alcohol consequences, and acceptability of drinking. Methods Participants were 248 heavy-drinking undergraduates (81.05% female; Mage = 23.45). Results Stronger perceptions of descriptive and injunctive norms for drinking and more positive evaluations of alcohol consequences were positively associated with drinking and the number of drinks considered acceptable. Descriptive and injunctive norms interacted, indicating that injunctive norms were linked with number of acceptable drinks among those with higher descriptive norms. Descriptive norms and evaluations of consequences interacted, indicating that descriptive norms were positively linked with number of acceptable drinks among those with negative evaluations of consequences; however, among those with positive evaluations of consequences, descriptive norms were negatively associated with number of acceptable drinks. Injunctive norms and evaluations of consequences interacted, indicating that injunctive norms were positively associated with number of acceptable drinks, particularly among those with positive evaluations of consequences. A three-way interaction emerged between injunctive and descriptive norms and evaluations of consequences, suggesting that injunctive norms and the number of acceptable drinks were positively associated more strongly among those with negative versus positive evaluations of consequences. Those with higher acceptable drinks also had positive evaluations of consequences and were high in injunctive norms. Conclusions Findings supported hypotheses that norms and evaluations of alcohol consequences would interact with respect to drinking and acceptance of drinking. These examinations have practical utility and may inform development and implementation of interventions and programs targeting alcohol misuse among heavy drinking undergraduates. PMID:25437265

  4. Extending the Mertonian Norms: Scientists' Subscription to Norms of Research

    ERIC Educational Resources Information Center

    Anderson, Melissa S.; Ronning, Emily A.; De Vries, Raymond; Martinson, Brian C.

    2010-01-01

    This analysis, based on focus groups and a national survey, assesses scientists' subscription to the Mertonian norms of science and associated counternorms. It also supports extension of these norms to governance (as opposed to administration), as a norm of decision-making, and quality (as opposed to quantity), as an evaluative norm. (Contains 1…

  5. Application of artificial intelligence to impulsive orbital transfers

    NASA Technical Reports Server (NTRS)

    Burns, Rowland E.

    1987-01-01

    A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
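    Since the Hohmann transfer is the illustrative problem here, a short worked example may help; it computes the two impulses for a coplanar circular-to-circular transfer using the standard vis-viva relations. The 300 km LEO to GEO geometry and Earth's gravitational parameter are illustrative choices, not values from the report.

```python
# Worked Hohmann-transfer example: the two impulses for a coplanar
# circular-to-circular transfer, here from a 300 km LEO to GEO radius
# (illustrative numbers, Earth mu = 398600.4418 km^3/s^2).
import math

mu = 398600.4418          # km^3/s^2
r1 = 6378.137 + 300.0     # initial circular orbit radius, km
r2 = 42164.0              # final circular orbit radius, km

a_t = 0.5 * (r1 + r2)                              # transfer ellipse semi-major axis
v1 = math.sqrt(mu / r1)                            # initial circular speed
v2 = math.sqrt(mu / r2)                            # final circular speed
v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))    # transfer speed at perigee
v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))     # transfer speed at apogee

dv1 = v_peri - v1
dv2 = v2 - v_apo
print(f"dv1 = {dv1:.3f} km/s, dv2 = {dv2:.3f} km/s, total = {dv1 + dv2:.3f} km/s")
```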

  6. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
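    A compact sketch of the kernel-estimation step is given below. The study estimates response functions with boosting and cross-validation at each source element; the sketch substitutes ridge regression on a lagged design matrix, a common simpler alternative, and uses synthetic data. Function names and parameters are illustrative.

```python
# Sketch of estimating a temporal response function (kernel) that predicts a
# continuous neural response from a continuous stimulus predictor. The study
# uses boosting with cross-validation; ridge regression stands in here.
import numpy as np

def estimate_trf(stimulus, response, n_lags=40, alpha=1.0):
    """Ridge-regularized kernel h such that response[t] ~ sum_k h[k]*stimulus[t-k]."""
    s = np.asarray(stimulus, dtype=float)
    r = np.asarray(response, dtype=float)
    n = len(s)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = s[: n - k]              # column k = stimulus delayed by k samples
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ r)

# Synthetic check: recover a known kernel from a noisy convolution.
rng = np.random.default_rng(0)
true_h = np.exp(-np.arange(40) / 8.0) * np.sin(np.arange(40) / 3.0)
stim = rng.standard_normal(5000)
resp = np.convolve(stim, true_h)[:5000] + 0.1 * rng.standard_normal(5000)
est_h = estimate_trf(stim, resp)
```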

  7. Protecting the fair trial rights of mentally disordered defendants in criminal proceedings: Exploring the need for further EU action.

    PubMed

    Verbeke, Peter; Vermeulen, Gert; Meysman, Michaël; Vander Beken, Tom

    2015-01-01

    Using the new legal basis provided by the Lisbon Treaty, the Council of the European Union has endorsed the 2009 Procedural Roadmap for strengthening the procedural rights of suspected or accused persons in criminal proceedings. This Roadmap has so far resulted in six measures from which specific procedural minimum standards have been and will be adopted or negotiated. So far, only Measure E directly touches on the specific issue of vulnerable persons. This Measure has recently produced a tentative result through a Commission Recommendation on procedural safeguards for vulnerable persons in criminal proceedings. This contribution aims to discuss the need for the introduction of binding minimum standards throughout Europe to provide additional protection for mentally disordered defendants. The paper will examine whether or not the member states adhere to existing fundamental norms and standards in this context, and whether the application of these norms and standards should be made more uniform. For this purpose, the procedural situation of mentally disordered defendants in Belgium and England and Wales will be thoroughly explored. The research establishes that Belgian law is unsatisfactory in the light of the Strasbourg case law, and that the situation in practice in England and Wales indicates not only that there is justifiable doubt about whether fundamental principles are always adhered to, but also that these principles should become more anchored in everyday practice. It will therefore be argued that there is a need for putting Measure E into practice. The Commission Recommendation, though only suggestive, may serve as a necessary and inspirational vehicle to improve the procedural rights of mentally disordered defendants and to ensure that member states are able to cooperate within the mutual recognition framework without being challenged on the grounds that they are collaborating with peers who do not respect defendants' fundamental fair trial rights. Throughout this contribution the term 'defendant' will be used, and no difference will be made in terminology between suspected and accused persons. This contribution only covers the situation of mentally disordered adult defendants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existent numerical and symbolic computational software programs available. Ideas from the transformation theory are adopted allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equation on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms emphasizing on natural relations with certain Riccati(and/or Ermakov)-type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows to solve formally this problem from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equation. The connection between linear (the Diffusion-type) and nonlinear (Burgers-type) parabolic equations is stress in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, it is constructed explicitly the minimum-uncertainty squeezed states for quantum harmonic oscillators. They are derived by the action of corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instances that one variance is a minimum and the other is a maximum, when the squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modication of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.

  9. Melting relations in the system FeCO3-MgCO3 and thermodynamic modelling of Fe-Mg carbonate melts

    NASA Astrophysics Data System (ADS)

    Kang, Nathan; Schmidt, Max W.; Poli, Stefano; Connolly, James A. D.; Franzolin, Ettore

    2016-09-01

    To constrain the thermodynamics and melting relations of the siderite-magnesite (FeCO3-MgCO3) system, 27 piston cylinder experiments were conducted at 3.5 GPa and 1170-1575 °C. Fe-rich compositions were also investigated with 13 multi-anvil experiments at 10, 13.6 and 20 GPa, 1500-1890 °C. At 3.5 GPa, the solid solution siderite-magnesite coexists with melt over a compositional range of X_Mg (= Mg/(Mg + Fe_tot)) = 0.38-1.0, while at ≥10 GPa solid solution appears to be complete. At 3.5 GPa, the system is pseudo-binary because of the limited stability of siderite or liquid FeCO3, Fe-rich carbonates decomposing at subsolidus conditions to magnetite-magnesioferrite solid solution, graphite and CO2. Similar reactions also occur with liquid FeCO3, resulting in melt species with ferric iron components, but the decomposition of the liquid decreases in importance with pressure. At 3.5 GPa, the metastable melting temperature of pure siderite is located at 1264 °C, whereas pure magnesite melts at 1629 °C. The melting loop is non-ideal on the Fe side, where the dissociation reaction resulting in Fe3+ in the melt depresses melting temperatures and causes a minimum. Over the pressure range of 3.5-20 GPa, this minimum is 20-35 °C lower than the (metastable) siderite melting temperature. By merging all present and previous experimental data, standard state (298.15 K, 1 bar) thermodynamic properties of the magnesite melt (MgCO3L) end member are calculated and the properties of the (Fe,Mg)CO3 melt are fitted by a regular solution model with an interaction parameter of -7600 J/mol. The solution model reproduces the asymmetric melting loop and predicts the thermal minimum at 1240 °C near the siderite side at X_Mg = 0.2 (3.5 GPa). The solution model is applicable to pressures reaching the bottom of the upper mantle and allows calculation of phase relations in the FeO-MgO-O2-C system.
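    For reference, a one-term (symmetric) regular-solution form with the quoted interaction parameter can be written out as below; the paper's actual fit uses a higher-order sub-regular Redlich-Kister expansion, so this is only the simplest version of the model.

```latex
% One-term (symmetric) regular-solution Gibbs energy of mixing for the
% (Fe,Mg)CO3 melt, with W = -7600 J/mol as quoted in the abstract; the
% paper's full fit uses a higher-order (sub-regular) Redlich-Kister expansion.
\[
  G^{\mathrm{xs}}_{\mathrm{mix}} = W\, X_{\mathrm{Mg}}\bigl(1 - X_{\mathrm{Mg}}\bigr),
  \qquad
  G_{\mathrm{mix}} = RT\bigl[X_{\mathrm{Mg}}\ln X_{\mathrm{Mg}}
                     + (1 - X_{\mathrm{Mg}})\ln(1 - X_{\mathrm{Mg}})\bigr]
                     + G^{\mathrm{xs}}_{\mathrm{mix}},
  \qquad W = -7600~\mathrm{J\,mol^{-1}}.
\]
```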

  10. Children are sensitive to norms of giving.

    PubMed

    McAuliffe, Katherine; Raihani, Nichola J; Dunham, Yarrow

    2017-10-01

    People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were told how much other children give (descriptive norm) or what they should give according to an adult (injunctive norm). Results showed that children generally gave more when they were exposed to a generous norm. However, patterns of compliance varied with age. Younger children were more likely to comply with the selfish norm, suggesting a licensing effect. By contrast, older children were more influenced by the generous norm, yet capped their donations at 50%, perhaps adhering to a pre-existing norm of equality. Children were not differentially influenced by descriptive or injunctive norms, suggesting a primacy of norm content over norm format. Together, our findings indicate that while generosity is malleable in children, normative information does not completely override pre-existing biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. U S Navy Diving Manual. Volume 2. Mixed-Gas Diving. Revision 1.

    DTIC Science & Technology

    1981-07-01

    ...has been soaked in a solution of caustic potash. This chemical absorbed the carbon... between the diver's breathing passages and the circuit must be of minimum volume to preclude deadspace and... space around the absorbent bed to reduce the gas flow distance. The... important aspects of underwater physics and physiology as they... minimum of caustic fumes. Water produced by the... strongly react with water to produce caustic fumes and cannot be used in UBA's.

  12. The nucleolus is well-posed

    NASA Astrophysics Data System (ADS)

    Fragnelli, Vito; Patrone, Fioravante; Torre, Anna

    2006-02-01

    The lexicographic order is not representable by a real-valued function, contrary to many other orders or preorders. So, standard tools and results for well-posed minimum problems cannot be used. We prove that under suitable hypotheses it is however possible to guarantee the well-posedness of a lexicographic minimum over a compact or convex set. This result allows us to prove that some game theoretical solution concepts, based on lexicographic order are well-posed: in particular, this is true for the nucleolus.

  13. Research in computational fluid dynamics and analysis of algorithms

    NASA Technical Reports Server (NTRS)

    Gottlieb, David

    1992-01-01

    Recently, higher-order compact schemes have seen increasing use in DNS (Direct Numerical Simulations) of the Navier-Stokes equations. Although they do not have the spatial resolution of spectral methods, they offer significant increases in accuracy over conventional second-order methods. They can be used on any smooth grid, and do not have an overly restrictive CFL dependence as compared with the O(N(exp -2)) CFL dependence observed in Chebyshev spectral methods on finite domains. In addition, they are generally more robust and less costly than spectral methods. The relative cost of higher-order schemes (accuracy weighted against physical and numerical cost) is a far more complex issue, depending ultimately on what features of the solution are sought and how accurately they must be resolved. In any event, the further development of the underlying stability theory of these schemes is important. The approach of devising suitable boundary closures and then testing them with various stability techniques (such as finding the norm) is entirely the wrong approach when dealing with high-order methods. Very seldom are high-order boundary closures stable, making them difficult to isolate. An alternative approach is to begin with a norm which satisfies all the stability criteria for the hyperbolic system, and look for the boundary closure forms which will match the norm exactly. This method was used recently by Strand to isolate stable boundary closure schemes for the explicit central fourth- and sixth-order schemes. The norm used was an energy norm mimicking the norm for the differential equations. Further research should be devoted to boundary conditions for high-order schemes in order to make sure that the results obtained are reliable. The compact fourth-order and sixth-order finite difference schemes have been incorporated into a code to simulate flow past circular cylinders. This code will serve as a verification of the full spectral codes. A detailed stability analysis by Carpenter (from the Fluid Mechanics Division) and Gottlieb gave analytic conditions for stability as well as asymptotic stability. This has been incorporated in the code in the form of stable boundary conditions. The effects of cylinder rotation have been studied; the results differ from the known theoretical results and are still being analyzed. The effects of heating the cylinder on the shedding frequency have also been analyzed using the above schemes. It was found that the shedding frequency decreases when the wire is heated, and experimental work is being carried out to confirm this result.

  14. 21 CFR 177.1560 - Polyarylsulfone resins.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... polymer units. The copolymers have a minimum reduced viscosity of 0.40 deciliter per gram in 1-methyl-2... Solution Viscosity of Polymers,” which is incorporated by reference. Copies may be obtained from the...

  15. 21 CFR 177.1585 - Polyestercarbonate resins.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... solution intrinsic viscosity of the polyestercarbonate resins shall be a minimum of 0.44 deciliter per gram, as determined by a method entitled “Intrinsic Viscosity (IV) of Lexan ® Polyestercarbonate Resin by a...

  16. 21 CFR 177.1560 - Polyarylsulfone resins.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... polymer units. The copolymers have a minimum reduced viscosity of 0.40 deciliter per gram in 1-methyl-2... Solution Viscosity of Polymers,” which is incorporated by reference. Copies may be obtained from the...

  17. 21 CFR 177.1585 - Polyestercarbonate resins.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... solution intrinsic viscosity of the polyestercarbonate resins shall be a minimum of 0.44 deciliter per gram, as determined by a method entitled “Intrinsic Viscosity (IV) of Lexan ® Polyestercarbonate Resin by a...

  18. 21 CFR 177.1560 - Polyarylsulfone resins.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... polymer units. The copolymers have a minimum reduced viscosity of 0.40 deciliter per gram in 1-methyl-2... Solution Viscosity of Polymers,” which is incorporated by reference. Copies may be obtained from the...

  19. 21 CFR 177.1585 - Polyestercarbonate resins.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... solution intrinsic viscosity of the polyestercarbonate resins shall be a minimum of 0.44 deciliter per gram, as determined by a method entitled “Intrinsic Viscosity (IV) of Lexan ® Polyestercarbonate Resin by a...

  20. 21 CFR 177.1560 - Polyarylsulfone resins.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... polymer units. The copolymers have a minimum reduced viscosity of 0.40 deciliter per gram in 1-methyl-2... Solution Viscosity of Polymers,” which is incorporated by reference. Copies may be obtained from the...

  1. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  2. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  3. Optimal Time-decay Estimates for the Compressible Navier-Stokes Equations in the Critical L p Framework

    NASA Astrophysics Data System (ADS)

    Danchin, Raphaël; Xu, Jiang

    2017-04-01

    The global existence issue for the isentropic compressible Navier-Stokes equations in the critical regularity framework was addressed in Danchin (Invent Math 141(3):579-614, 2000) more than 15 years ago. However, whether (optimal) time-decay rates could be shown in critical spaces has remained an open question. Here we give a positive answer to that issue not only in the L^2 critical framework of Danchin (Invent Math 141(3):579-614, 2000) but also in the general L^p critical framework of Charve and Danchin (Arch Ration Mech Anal 198(1):233-271, 2010), Chen et al. (Commun Pure Appl Math 63(9):1173-1224, 2010), Haspot (Arch Ration Mech Anal 202(2):427-460, 2011): we show that under a mild additional decay assumption that is satisfied if, for example, the low frequencies of the initial data are in L^{p/2}(R^d), the L^p norm (in fact the slightly stronger homogeneous Besov norm Ḃ^0_{p,1}) of the critical global solutions decays like t^{-d(1/p - 1/4)} as t → +∞, exactly as firstly observed by Matsumura and Nishida in (Proc Jpn Acad Ser A 55:337-342, 1979) in the case p = 2 and d = 3, for solutions with high Sobolev regularity. Our method relies on refined time weighted inequalities in the Fourier space, and is likely to be effective for other hyperbolic/parabolic systems that are encountered in fluid mechanics or mathematical physics.

  4. The wet solidus of silica: predictions from the scaled particle theory and polarized continuum model.

    PubMed

    Ottonello, G; Richet, P; Vetuschi Zuccolini, M

    2015-02-07

    We present an application of the Scaled Particle Theory (SPT) coupled with an ab initio assessment of the electronic, dispersive, and repulsive energy terms based on the Polarized Continuum Model (PCM), aimed at reproducing the observed solubility behavior of OH2 over the entire compositional range from pure molten silica to pure water and over wide pressure and temperature regimes. It is shown that the solution energy is dominated by cavitation terms, mainly entropic in nature, which cause a large negative solution entropy and a consequent marked increase of gas phase fugacity with increasing temperature. Moreover, the solution enthalpy is negative and dominated by electrostatic terms which depict a pseudopotential well whose minimum occurs at a low water fraction (XH2O) of about 6 mol %. The fine tuning of the solute-solvent interaction is achieved through very limited adjustments of the electrostatic scaling factor γel, which in pure water is slightly higher than the nominal value (i.e., γel = 1.224 against 1.2), attains its minimum at low H2O content (γel = 0.9958), and rises again at infinite dilution (γel = 1.0945). The complex solution behavior is interpreted as being due to the formation of energetically efficient hydrogen bonding when OH functionals are present in appropriate amounts and relative positioning with respect to the discrete OH2 molecules, reinforcing in this way the nominal solute-solvent inductive interaction. The interaction energy derived from the SPT-PCM calculations is then recast in terms of a sub-regular Redlich-Kister expansion of appropriate order, whereas the thermodynamic properties of the H2O component at its standard state (1-molal solution referred to infinite dilution) are calculated from partial differentiation of the solution energy over the intensive variables.

  5. Impact of Norm Perceptions and Guilt on Audience Response to Anti-Smoking Norm PSAs: The Case of Korean Male Smokers

    ERIC Educational Resources Information Center

    Lee, Hyegyu; Paek, Hye-Jin

    2013-01-01

    Objective: To examine how norm appeals and guilt influence smokers' behavioural intention. Design: Quasi-experimental design. Setting: South Korea. Method: Two hundred and fifty-five male smokers were randomly assigned to descriptive, injunctive, or subjective anti-smoking norm messages. After they viewed the norm messages, their norm perceptions,…

  6. Overview of NORM and activities by a NORM licensed permanent decontamination and waste processing facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirro, G.A.

    1997-02-01

    This paper presents an overview of issues related to handling NORM materials, and provides a description of a facility designed for the processing of NORM contaminated equipment. With regard to handling NORM materials the author discusses sources of NORM, problems, regulations and disposal options, potential hazards, safety equipment, and issues related to personnel protection. For the facility, the author discusses: description of the permanent facility; the operations of the facility; the license it has for handling specific radioactive material; operating and safety procedures; decontamination facilities on site; NORM waste processing capabilities; and offsite NORM services which are available.

  7. A case study on the formation and sharing process of science classroom norms

    NASA Astrophysics Data System (ADS)

    Chang, Jina; Song, Jinwoong

    2016-03-01

    The teaching and learning of science in school are influenced by various factors, including both individual factors, such as member beliefs, and social factors, such as the power structure of the class. To understand this complex context affected by various factors in schools, we investigated the formation and sharing process of science classroom norms in connection with these factors. By examining the developmental process of science classroom norms, we identified how the norms were realized, shared, and internalized among the members. We collected data through classroom observations and interviews focusing on two elementary science classrooms in Korea. From these data, factors influencing norm formation were extracted and developed as stories about norm establishment. The results indicate that every science classroom norm was established, shared, and internalized differently according to the values ingrained in the norms, the agent of norm formation, and the members' understanding about the norm itself. The desirable norms originating from values in science education, such as having an inquiring mind, were not established spontaneously by students, but were instead established through well-organized norm networks to encourage concrete practice. Educational implications were discussed in terms of the practice of school science inquiry, cultural studies, and value-oriented education.

  8. Tabu search algorithm for the distance-constrained vehicle routing problem with split deliveries by order.

    PubMed

    Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua

    2018-01-01

    The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that the customer demand, which can't be split in the classical VRP model, can only be split into discrete deliveries by order. A model of double objective programming is constructed by taking the minimum number of vehicles used and the minimum vehicle traveling cost as the first and the second objective, respectively. This approach contains a series of constraints, such as single depot, single vehicle type, distance and load capacity limits, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and example tests show the efficiency of the proposed algorithm. This paper focuses on constructing a double objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving the problem. The performance of the ATSA is improved by adding some strategies into the search process, including: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, under which the best solution must be feasible while the current solution may accept some infeasibility, helps to balance solution quality and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; (f) a tabu-releasing strategy is used to transfer the current solution into a new neighborhood of a better solution.

  9. Tabu search algorithm for the distance-constrained vehicle routing problem with split deliveries by order

    PubMed Central

    Xia, Yangkun; Pan, Lijun; Duan, Fenghua

    2018-01-01

    The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that the customer demand, which can't be split in the classical VRP model, can only be split into discrete deliveries by order. A model of double objective programming is constructed by taking the minimum number of vehicles used and the minimum vehicle traveling cost as the first and the second objective, respectively. This approach contains a series of constraints, such as single depot, single vehicle type, distance and load capacity limits, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and example tests show the efficiency of the proposed algorithm. This paper focuses on constructing a double objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving the problem. The performance of the ATSA is improved by adding some strategies into the search process, including: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, under which the best solution must be feasible while the current solution may accept some infeasibility, helps to balance solution quality and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; (f) a tabu-releasing strategy is used to transfer the current solution into a new neighborhood of a better solution. PMID:29763419
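    A generic tabu-search skeleton, sketched below, illustrates the ingredients described above: a neighborhood move, a tabu list, and an adaptive penalty that lets the current solution wander into infeasibility while the best solution must stay feasible. The VRP-specific moves, the split-delivery handling, and the two-level objective of the ATSA are not reproduced; the toy problem at the end is purely illustrative.

```python
# Generic tabu-search skeleton: neighborhood moves, a tabu list, and an
# adaptive penalty that allows infeasible current solutions while requiring
# the best solution to remain feasible.
import random

def tabu_search(init, neighbors, cost, feasible, iters=500, tabu_len=20, seed=0):
    rng = random.Random(seed)
    current, best = init, init
    penalty, tabu = 1.0, []
    for _ in range(iters):
        cands = [n for n in neighbors(current, rng) if n not in tabu]
        if not cands:
            continue
        # Penalized cost: infeasible candidates pay an adaptive surcharge.
        current = min(cands, key=lambda s: cost(s) + (0 if feasible(s) else penalty))
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        penalty = penalty * 0.9 if feasible(current) else penalty * 1.5
        if feasible(current) and cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimize a quadratic over integers with a simple feasibility rule.
best = tabu_search(
    init=17,
    neighbors=lambda s, rng: [s + d for d in (-2, -1, 1, 2)],
    cost=lambda s: (s - 3) ** 2,
    feasible=lambda s: s % 2 == 1,        # only odd values are "feasible"
    iters=200,
)
```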

  10. Bodies obliged and unbound: differentiated response tendencies for injunctive and descriptive social norms.

    PubMed

    Jacobson, Ryan P; Mortensen, Chad R; Cialdini, Robert B

    2011-03-01

    The authors suggest that injunctive and descriptive social norms engage different psychological response tendencies when made selectively salient. On the basis of suggestions derived from the focus theory of normative conduct and from consideration of the norms' functions in social life, the authors hypothesized that the 2 norms would be cognitively associated with different goals, would lead individuals to focus on different aspects of self, and would stimulate different levels of conflict over conformity decisions. Additionally, a unique role for effortful self-regulation was hypothesized for each type of norm-used as a means to resist conformity to descriptive norms but as a means to facilitate conformity for injunctive norms. Four experiments supported these hypotheses. Experiment 1 demonstrated differences in the norms' associations to the goals of making accurate/efficient decisions and gaining/maintaining social approval. Experiment 2 provided evidence that injunctive norms lead to a more interpersonally oriented form of self-awareness and to a greater feeling of conflict about conformity decisions than descriptive norms. In the final 2 experiments, conducted in the lab (Experiment 3) and in a naturalistic environment (Experiment 4), self-regulatory depletion decreased conformity to an injunctive norm (Experiments 3 and 4) and increased conformity to a descriptive norm (Experiment 4)-even though the norms advocated identical behaviors. By illustrating differentiated response tendencies for each type of social norm, this research provides new and converging support for the focus theory of normative conduct. (c) 2011 APA, all rights reserved

  11. Reconstructing Norms

    ERIC Educational Resources Information Center

    Gorgorio, Nuria; Planas, Nuria

    2005-01-01

    Starting from the constructs "cultural scripts" and "social representations", and on the basis of the empirical research we have been developing until now, we revisit the construct norms from a sociocultural perspective. Norms, both sociomathematical norms and norms of the mathematical practice, as cultural scripts influenced…

  12. [Women's perceptions on intimate partner violence in Mexico].

    PubMed

    Agoff, Carolina; Rajsbaum, Ari; Herrera, Cristina

    2006-01-01

    To identify personal, cultural, and institutional factors that hinder the solution to domestic violence. In Quintana Roo, Coahuila, and Mexico City, 26 in-depth interviews with women currently suffering from intimate partner violence and others who had already found a solution were carried out, between May and November 2003. Among women's explanations to violence, it was possible to distinguish between causes (non intentional violence) and motives (intentional violence). Associated with these explanations, issues related to tolerance emerge, as well as attribution of responsibility. Moreover, the social ties of the women contribute to the acting out of gender roles and the justification or tolerance of conjugal abuse. The dominant values and norms of gender in society, shared by abused women and the community, are responsible for the perpetuation of intimate partner violence.

  13. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from such a method not only depend continuously on the final data, but also strongly converge to the exact solution in the L^2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  14. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, matrices for differentiation are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into finding the solution of an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results obtained confirm the applicability of the method.
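    The Lucas polynomials satisfy the recurrence L0(x) = 2, L1(x) = x, Ln(x) = x L(n-1)(x) + L(n-2)(x); a minimal sketch of the basic building block, evaluating the basis and its first derivative at collocation points and fitting a function by least squares, is given below. It is only a schematic of the expansion step, not the paper's full time-stepping scheme, and all names and sizes are illustrative.

```python
# Evaluate the Lucas polynomial basis and its first derivative at collocation
# points via the recurrence L0 = 2, L1 = x, Ln = x*L_{n-1} + L_{n-2}; these
# matrices are the kind of ingredient from which differentiation matrices for
# a Lucas-series expansion can be assembled.
import numpy as np

def lucas_basis(x, n_terms):
    """Return matrices L, dL with L[i, j] = L_j(x_i) and dL[i, j] = L_j'(x_i)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    L = np.zeros((len(x), n_terms))
    dL = np.zeros_like(L)
    L[:, 0] = 2.0                      # L0 = 2, L0' = 0
    if n_terms > 1:
        L[:, 1] = x                    # L1 = x, L1' = 1
        dL[:, 1] = 1.0
    for j in range(2, n_terms):
        L[:, j] = x * L[:, j - 1] + L[:, j - 2]
        dL[:, j] = L[:, j - 1] + x * dL[:, j - 1] + dL[:, j - 2]
    return L, dL

# A function u(x) ~ sum_j a_j L_j(x) can be fitted at collocation points and
# differentiated by applying dL to the coefficient vector.
xs = np.linspace(-1.0, 1.0, 9)
L, dL = lucas_basis(xs, n_terms=6)
a, *_ = np.linalg.lstsq(L, np.sin(np.pi * xs), rcond=None)   # fit sin(pi x)
du_approx = dL @ a                                           # approximate derivative
```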

  15. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009 ) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
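    The factorization idea can be illustrated with the toy sketch below, which fits a noisy low-rank matrix by combining a subspace (power) iteration for a small orthonormal factor with a closed-form update of the small coefficient matrix. The augmented-Lagrange treatment of the full nuclear-norm objective and the RPCA setting are not reproduced; this is only a schematic of optimizing over Q and R instead of the full matrix.

```python
# Toy sketch of the low-rank factorization idea: instead of the full matrix X,
# work with X = Q @ R, where Q is a small orthonormal factor (the "active
# subspace") and R a small coefficient matrix.
import numpy as np

def factorized_fit(M, rank, iters=50):
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((M.shape[0], rank)))  # orthonormal start
    for _ in range(iters):
        # Subspace (power) iteration: pull Q toward the dominant left singular subspace.
        Q, _ = np.linalg.qr(M @ (M.T @ Q))
    R = Q.T @ M          # small coefficient matrix for the final subspace
    return Q, R

rng = np.random.default_rng(1)
truth = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
noisy = truth + 0.05 * rng.standard_normal(truth.shape)
Q, R = factorized_fit(noisy, rank=5)
rel_err = np.linalg.norm(Q @ R - truth) / np.linalg.norm(truth)
```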

  16. Quadratic equations in Banach space, perturbation techniques and applications to Chandrasekhar's and related equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argyros, I.K.

    1984-01-01

    In this dissertation perturbation techniques are developed, based on the contraction mapping principle, which can be used to prove existence and uniqueness for the quadratic equation x = y + λB(x,x) (1) in a Banach space X; here B: X×X → X is a bounded, symmetric bilinear operator, λ is a positive parameter and y ∈ X is fixed. The following is the main result. Theorem. Suppose F: X×X → X is a bounded, symmetric bilinear operator and that the equation z = y + λF(z,z) has a solution z* of sufficiently small norm. Then equation (1) has a unique solution in a certain closed ball centered at z*. Applications. The theorem is applied to the famous Chandrasekhar equation and to the Anselone-Moore system, which are of the form (1) above, and yields existence and uniqueness for a solution of (1) for larger values of λ than previously known, as well as more accurate information on the location of solutions.
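
    The Chandrasekhar H-equation is a concrete instance of the quadratic form x = y + λB(x,x), and for 0 < c < 1 its solution can be approximated by a simple fixed-point (successive approximation) iteration; the quadrature rule and parameter values below are illustrative choices, not taken from the dissertation.

    import numpy as np

    def chandrasekhar_H(c=0.9, n=64, tol=1e-12, max_iter=500):
        """Solve H(mu) = 1 + (c/2) * mu * H(mu) * integral_0^1 H(nu)/(mu+nu) dnu."""
        mu = (np.arange(n) + 0.5) / n                                # midpoint nodes on (0, 1)
        w = np.full(n, 1.0 / n)                                      # midpoint weights
        K = (c / 2.0) * mu[:, None] / (mu[:, None] + mu[None, :])   # kernel matrix
        H = np.ones(n)                                               # start from H = 1 (the "y" term)
        for _ in range(max_iter):
            H_new = 1.0 + H * (K @ (w * H))                          # x <- y + lambda*B(x, x)
            if np.max(np.abs(H_new - H)) < tol:
                return H_new
            H = H_new
        return H

    H = chandrasekhar_H()
    print(H[-1])   # H is increasing in mu, so this is its largest sampled value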

  17. Entire radial solutions of elliptic systems and inequalities of the mean curvature type

    NASA Astrophysics Data System (ADS)

    Filippucci, Roberta

    2007-10-01

    In this paper we first study nonexistence of radial entire solutions of elliptic systems of the mean curvature type with a singular or degenerate diffusion depending on the solution u. In particular we extend a previous result given in [R. Filippucci, Nonexistence of radial entire solutions of elliptic systems, J. Differential Equations 188 (2003) 353-389]. Moreover, in the scalar case we obtain nonexistence of all entire solutions, radial or not, of differential inequalities involving again operators of the mean curvature type and a diffusion term. We prove that in the scalar case, nonexistence of entire solutions is due to the explosion of the derivative of every nonglobal radial solution in the right extremum of the maximal interval of existence, while at that point the solution is bounded. This behavior is qualitatively different with respect to what happens for the m-Laplacian operator, studied in [R. Filippucci, Nonexistence of radial entire solutions of elliptic systems, J. Differential Equations 188 (2003) 353-389], where nonexistence of entire solutions is due, even in the vectorial case, to the explosion in norm of the solution at a finite point. Our nonexistence theorems for inequalities extend previous results given by Naito and Usami in [Y. Naito, H. Usami, Entire solutions of the inequality div(A(|∇u|)∇u) ≥ f(u), Math. Z. 225 (1997) 167-175] and Ghergu and Radulescu in [M. Ghergu, V. Radulescu, Existence and nonexistence of entire solutions to the logistic differential equation, Abstr. Appl. Anal. 17 (2003) 995-1003].

  18. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
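
    A hedged sketch of the bookkeeping described above: given a stack of reconstructions of the same test image (one per noise realization), the image MSE decomposes into squared bias plus variance, and both can be estimated pixel-wise. The synthetic data below merely stand in for the 100 repeated reconstructions.

    import numpy as np

    def image_error_stats(recons, truth):
        """recons: (n_repeats, ny, nx) stack of reconstructions; truth: (ny, nx)."""
        mean_img = recons.mean(axis=0)
        bias2 = (mean_img - truth) ** 2      # squared bias per pixel
        var = recons.var(axis=0)             # variance per pixel
        mse = bias2 + var                    # pixel-wise mean-squared error
        return bias2.mean(), var.mean(), mse.mean()

    rng = np.random.default_rng(1)
    truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0
    recons = truth + 0.05 + 0.1 * rng.standard_normal((100, 32, 32))   # biased + noisy
    print(image_error_stats(recons, truth))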

  19. Feedback brake distribution control for minimum pitch

    NASA Astrophysics Data System (ADS)

    Tavernini, Davide; Velenis, Efstathios; Longo, Stefano

    2017-06-01

    The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.

  20. Nonlinear Schrödinger equations with single power nonlinearity and harmonic potential

    NASA Astrophysics Data System (ADS)

    Cipolatti, R.; de Macedo Lira, Y.; Trallero-Giner, C.

    2018-03-01

    We consider a generalized nonlinear Schrödinger equation (GNLS) with a single power nonlinearity of the form λ|φ|^p, with p > 0 and λ ∈ ℝ, in the presence of a harmonic confinement. We report the conditions that p and λ must fulfill for the existence and uniqueness of ground states of the GNLS. We discuss the Cauchy problem and summarize which conditions are required for the nonlinear term λ|φ|^p to render the ground state solutions orbitally stable. Based on a new variational method we provide exact formulæ for the minimum energy for each index p and the corresponding range of values of the nonlinear parameter λ. Also, we report an approximate closed analytical expression for the ground state energy, performing a comparative analysis of the present variational calculations with those obtained by a generalized Thomas-Fermi approach, and soliton solutions for the respective ranges of p and λ where these solutions can be implemented to describe the minimum energy.
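
    For orientation only (the normalization conventions here are assumptions, not taken from the paper), a GNLS of the kind described, with harmonic confinement and a single power nonlinearity, is commonly written as

    \[
    i\,\partial_t \varphi = -\tfrac{1}{2}\nabla^{2}\varphi + \tfrac{1}{2}|\mathbf r|^{2}\varphi + \lambda\,|\varphi|^{p}\varphi ,
    \qquad
    E[\varphi]=\int\!\Big(\tfrac{1}{2}|\nabla\varphi|^{2}+\tfrac{1}{2}|\mathbf r|^{2}|\varphi|^{2}+\tfrac{2\lambda}{p+2}|\varphi|^{p+2}\Big)\,d\mathbf r ,
    \]

    with ground states obtained by minimizing E[\varphi] under the mass constraint \int |\varphi|^{2}\,d\mathbf r = 1; the admissible ranges of p and \lambda then control the existence, uniqueness and orbital stability of these minimizers.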

  1. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for the Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution; this reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.

  2. Efficacy of tumescent local anesthesia with variable lidocaine concentration in 3430 consecutive cases of liposuction.

    PubMed

    Habbema, Louis

    2010-06-01

    Lidocaine toxicity is a potential complication related to using tumescent local anesthesia (TLA) as the exclusive form of pain management in surgical procedures. We sought to determine the minimum concentration of lidocaine in the tumescent solution required to provide adequate anesthesia in patients undergoing liposuction using TLA exclusively. Liposuction using TLA exclusively was performed in 3430 procedures by the same surgeon. The initial concentration of 1000 mg/L lidocaine in the tumescent solution was gradually reduced to find the minimum required for adequate anesthesia. Adequate anesthesia was achieved using a lidocaine concentration of 500 mg/L saline in all areas treated and 400 mg/L saline for most of the areas treated. Data are based on the specific TLA technique used by the same surgeon. Lidocaine serum levels were not analyzed. For patients undergoing liposuction using TLA exclusively, the concentration of lidocaine in the normal saline solution required for adequate anesthesia is 400 mg/L for most body areas and 500 mg/L for some sensitive areas. Copyright 2009 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  3. Peer Group Norms and Accountability Moderate the Effect of School Norms on Children's Intergroup Attitudes

    ERIC Educational Resources Information Center

    McGuire, Luke; Rutland, Adam; Nesdale, Drew

    2015-01-01

    The present study examined the interactive effects of school norms, peer norms, and accountability on children's intergroup attitudes. Participants (n = 229) aged 5-11 years, in a between-subjects design, were randomly assigned to a peer group with an inclusion or exclusion norm, learned their school either had an inclusion norm or not, and were…

  4. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and provides access to varied information adapted to their needs. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
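
    A minimal sketch of Prim's algorithm driven by a weight (adjacency) matrix, in the spirit of the tool described above; the toy network and function names are illustrative, and the ArcGIS/Python-script integration is not reproduced here.

    import numpy as np

    def prim_mst(W):
        """W: symmetric (n, n) weight matrix with np.inf where no edge exists.
        Returns the MST edges as (i, j, weight) tuples and the total weight."""
        n = W.shape[0]
        in_tree = [0]                        # grow the tree from node 0
        edges, total = [], 0.0
        while len(in_tree) < n:
            best = None
            for i in in_tree:                # cheapest edge leaving the current tree
                for j in range(n):
                    if j not in in_tree and W[i, j] < np.inf:
                        if best is None or W[i, j] < best[2]:
                            best = (i, j, W[i, j])
            i, j, w = best
            in_tree.append(j)
            edges.append((i, j, w))
            total += w
        return edges, total

    INF = np.inf
    W = np.array([[INF, 4, 1, INF],
                  [4, INF, 2, 5],
                  [1, 2, INF, 8],
                  [INF, 5, 8, INF]], dtype=float)
    print(prim_mst(W))   # expected total weight: 1 + 2 + 5 = 8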

  5. Psychometric Properties of the Eating Disorder Examination Questionnaire (EDE-Q) and Norms for Rural and Urban Adolescent Males and Females in Mexico

    PubMed Central

    Penelo, Eva; Raich, Rosa M.

    2013-01-01

    Aims To contribute new evidence to the controversy about the factor structure of the Eating Disorder Examination Questionnaire (EDE-Q) and to provide, for the first time, norms based on a large adolescent Mexican community sample, regarding sex and area of residence (urban/rural). Methods A total of 2928 schoolchildren (1544 females and 1384 males) aged 11-18 were assessed with the EDE-Q and other disordered eating questionnaire measures. Results Confirmatory factor analysis of the attitudinal items of the EDE-Q did not support the four theorized subscales, and a two-factor solution, Restraint and Eating-Shape-Weight concern, showed better fit than the other models examined (RMSEA = .054); measurement invariance for this two-factor model across sex and area of residence was found. Satisfactory internal consistency (ω ≥ .80) and two-week test-retest reliability (ICCa ≥ .84; κ ≥ .56), and evidence for convergent validity with external measures was obtained. The highest attitudinal EDE-Q scores were found for urban females and the lowest scores were found for rural males, whereas the occurrence of key eating disorder behavioural features and compensatory behaviours was similar in both areas of residence. Conclusions This study reveals satisfactory psychometric properties and provides population norms of the EDE-Q, which may help clinicians and researchers to interpret the EDE-Q scores of adolescents from urban and rural areas in Mexico. PMID:24367587

  6. Stabilization of the SIESTA MHD Equilibrium Code Using Rapid Cholesky Factorization

    NASA Astrophysics Data System (ADS)

    Hirshman, S. P.; D'Azevedo, E. A.; Seal, S. K.

    2016-10-01

    The SIESTA MHD equilibrium code solves the discretized nonlinear MHD force F ≡ J × B − ∇p for a 3D plasma which may contain islands and stochastic regions. At each nonlinear evolution step, it solves a set of linearized MHD equations which can be written r ≡ Ax − b = 0, where A is the linearized MHD Hessian matrix. When the solution norm |x| is small enough, the nonlinear force norm will be close to the linearized force norm |r| → 0 obtained using preconditioned GMRES. In many cases, this procedure works well and leads to a vanishing nonlinear residual (equilibrium) after several iterations in SIESTA. In some cases, however, |x| > 1 results and the SIESTA code has to be restarted to obtain nonlinear convergence. In order to make SIESTA more robust and avoid such restarts, we have implemented a new rapid QR factorization of the Hessian which allows us to rapidly and accurately solve the least-squares problem A^T r = 0, subject to the condition |x| < 1. This avoids large contributions to the nonlinear force terms and in general makes the convergence sequence of SIESTA much more stable. The innovative rapid QR method is based on a pairwise row factorization of the tri-diagonal Hessian. It provides a complete Cholesky factorization while preserving the memory allocation of A. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
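
    As an illustration of the norm-capped least-squares step described above (a generic trust-region-style construction, not the SIESTA implementation or its pairwise row factorization), the constrained problem min ||Ax − b|| subject to |x| ≤ 1 can be handled by falling back to a damped solve when the unconstrained QR solution is too large:

    import numpy as np

    def damped_step(A, b, mu):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

    def capped_least_squares(A, b, cap=1.0, iters=60):
        Q, R = np.linalg.qr(A)                      # thin QR factorization
        x = np.linalg.solve(R, Q.T @ b)             # unconstrained least-squares step
        if np.linalg.norm(x) <= cap:
            return x
        lo, hi = 0.0, 1.0
        while np.linalg.norm(damped_step(A, b, hi)) > cap:
            hi *= 10.0                              # grow the damping until the step fits
        for _ in range(iters):                      # bisect on the damping parameter
            mid = 0.5 * (lo + hi)
            if np.linalg.norm(damped_step(A, b, mid)) > cap:
                lo = mid
            else:
                hi = mid
        return damped_step(A, b, hi)

    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
    print(np.linalg.norm(capped_least_squares(A, b)))   # <= 1 by construction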

  7. Segmentation of DTI based on tensorial morphological gradient

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). This technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques, such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product turned out to be inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence were both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, which is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
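
    A hedged sketch of a TMG computed with the Frobenius-norm dissimilarity on a toy 2D field of tensors (the 3x3 structuring element and the example field are illustrative choices): for each voxel the gradient is the maximum pairwise distance between tensors in its neighbourhood, and the resulting scalar map can then be thresholded or passed to a watershed.

    import numpy as np

    def tmg_frobenius(T):
        """T: (ny, nx, 3, 3) field of diffusion tensors; returns a scalar (ny, nx) map."""
        ny, nx = T.shape[:2]
        g = np.zeros((ny, nx))
        for y in range(ny):
            for x in range(nx):
                ys = slice(max(y - 1, 0), min(y + 2, ny))
                xs = slice(max(x - 1, 0), min(x + 2, nx))
                patch = T[ys, xs].reshape(-1, 3, 3)
                dmax = 0.0                   # maximum dissimilarity over the neighbourhood
                for i in range(len(patch)):
                    for j in range(i + 1, len(patch)):
                        dmax = max(dmax, np.linalg.norm(patch[i] - patch[j]))   # Frobenius norm
                g[y, x] = dmax
        return g

    # Two homogeneous regions with different tensors: the TMG is zero inside
    # each region and positive only along the boundary between them.
    T = np.empty((16, 16, 3, 3))
    T[:, :8] = np.diag([3.0, 1.0, 1.0])
    T[:, 8:] = np.diag([1.0, 1.0, 3.0])
    g = tmg_frobenius(T)
    print(g.max(), g[0, 0])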

  8. Naturally occurring radioactive material (NORM) from a former phosphoric acid processing plant.

    PubMed

    Beddow, H; Black, S; Read, D

    2006-01-01

    In recent years there has been an increasing awareness of the radiological impact of non-nuclear industries that extract and/or process ores and minerals containing naturally occurring radioactive material (NORM). These industrial activities may result in significant radioactive contamination of (by-) products, wastes and plant installations. In this study, scale samples were collected from a decommissioned phosphoric acid processing plant. To determine the nature and concentration of NORM retained in pipe-work and associated process plant, four main areas of the site were investigated: (1) the 'Green Acid Plant', where crude acid was concentrated; (2) the green acid storage tanks; (3) the Purified White Acid (PWA) plant, where inorganic impurities were removed; and (4) the solid waste, disposed of on-site as landfill. The scale samples predominantly comprise the following: fluorides (e.g. ralstonite); calcium sulphate (e.g. gypsum); and an assemblage of mixed fluorides and phosphates (e.g. iron fluoride hydrate, calcium phosphate), respectively. The radioactive inventory is dominated by 238U and its decay chain products, and significant fractionation along the series occurs. Compared to the feedstock ore, elevated concentrations (≤8.8 Bq/g) of 238U were found to be retained in installations where the process stream was rich in fluorides and phosphates. In addition, enriched levels (≤11 Bq/g) of 226Ra were found in association with precipitates of calcium sulphate. Water extraction tests indicate that many of the scales and waste contain significantly soluble materials and readily release radioactivity into solution.

  9. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    PubMed Central

    Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong

    2018-01-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely related to its number of undetermined parameters, leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach to combine the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method contributes to high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems. PMID:29867307
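
    The channel-combination idea can be illustrated with a generic l1-penalized linear model (this stands in for, and is not, the authors' method; the synthetic "decision value" features below are assumptions of the sketch): with an l1 penalty, most uninformative channels receive exactly zero weight, which is the mechanism that keeps the parameter count, and hence the overfitting, down.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_trials, n_channels = 200, 30
    y = rng.integers(0, 2, n_trials)
    X = rng.standard_normal((n_trials, n_channels))    # per-channel decision values
    X[:, :5] += (2 * y[:, None] - 1) * 0.8             # only 5 channels are informative

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    print((np.abs(clf.coef_) > 1e-8).sum(), "non-zero weights out of", n_channels)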

  10. Current Trends in the study of Gender Norms and Health Behaviors

    PubMed Central

    Fleming, Paul J.; Agnew-Brune, Christine

    2015-01-01

    Gender norms are recognized as one of the major social determinants of health and gender norms can have implications for an individual’s health behaviors. This paper reviews the recent advances in research on the role of gender norms on health behaviors most associated with morbidity and mortality. We find that (1) the study of gender norms and health behaviors is varied across different types of health behaviors, (2) research on masculinity and masculine norms appears to have taken on an increasing proportion of studies on the relationship between gender norms and health, and (3) we are seeing new and varied populations integrated into the study of gender norms and health behaviors. PMID:26075291

  11. 1-norm support vector novelty detection and its sparseness.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. High resolution beamforming on large aperture vertical line arrays: Processing synthetic data

    NASA Astrophysics Data System (ADS)

    Tran, Jean-Marie Q.; Hodgkiss, William S.

    1990-09-01

    This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which offers in theory the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data which constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz; and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
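
    For concreteness, two of the estimators named above (conventional and MVDR) can be sketched on a synthetic vertical line array; the geometry, source angles and noise level below are invented for the illustration and are not the ATLAS simulation settings.

    import numpy as np

    c, f = 1500.0, 100.0                    # sound speed (m/s), frequency (Hz)
    n = 64                                  # number of hydrophones
    z = (c / f / 2.0) * np.arange(n)        # half-wavelength element spacing

    def steer(theta_deg):
        """Plane-wave replica vector for an arrival angle measured from broadside."""
        k = 2.0 * np.pi * f / c
        return np.exp(1j * k * z * np.sin(np.deg2rad(theta_deg))) / np.sqrt(n)

    A = np.column_stack([steer(-5.0), steer(12.0)])     # two plane-wave arrivals
    R = A @ A.conj().T + 0.01 * np.eye(n)               # covariance plus white noise
    Rinv = np.linalg.inv(R)

    angles = np.linspace(-30.0, 30.0, 601)
    p_conv = [np.real(steer(t).conj() @ R @ steer(t)) for t in angles]           # conventional
    p_mvdr = [1.0 / np.real(steer(t).conj() @ Rinv @ steer(t)) for t in angles]  # MVDR
    print(angles[int(np.argmax(p_conv))], angles[int(np.argmax(p_mvdr))])        # peaks near -5 or 12 deg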

  13. 21 CFR 177.1200 - Cellophane.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Clay, natural Coconut oil fatty acid (C12-C18) diethanolamide, coconut oil fatty acid (C12-C18... acetate Do. Polyvinyl alcohol (minimum viscosity of 4 percent aqueous solution at 20 °C of 4 centipoises...

  14. Data inversion immune to cycle-skipping using AWI

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Umpleby, A.; Yao, G.; Morgan, J. V.

    2014-12-01

    Over the last decade, 3D Full Waveform Inversion (FWI) has become a standard model-building tool in exploration seismology, especially in oil and gas applications, thanks to the high-quality (spatially dense in sources and receivers) datasets acquired by the industry. FWI provides superior quantitative images compared with its travel-time counterparts because it aims to match all the information in the observations instead of a severely restricted subset of them, namely picked arrivals. The downside is that the solution space explored by FWI has a large number of local minima, and since the solution is restricted to local optimization methods (due to the cost of evaluating the objective function), the success of the inversion depends on starting within the basin of attraction of the global minimum. Local minima can exist for a wide variety of reasons, and it seems unlikely that a formulation of the problem that can eliminate all of them, by defining the optimization problem in a form that results in a monotonic objective function, exists. However, a significant number of local minima are created by the definition of the data misfit. In its standard formulation FWI compares observed data (field data) with predicted data (generated with a synthetic model) by subtracting one from the other, and the objective function is defined as some norm of this difference. The combination of this criterion and the fact that seismic data are oscillatory produces the well-known phenomenon of cycle-skipping, where model updates try to match the nearest cycles from one dataset to the other. In order to avoid cycle-skipping we propose a different comparison between observed and predicted data, based on Wiener filters, which exploits the fact that the "identity" Wiener filter is a spike at zero lag. This gives rise to a new objective function without cycle-skipping-related local minima, and therefore removes the need for accurate starting models or low frequencies in the data. This new technique, called Adaptive Waveform Inversion (AWI), appears consistently superior to conventional FWI.
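
    The matching-filter idea can be sketched in a few lines (the normalization, damping and single-trace handling below are assumptions of this sketch, not the published AWI formulation): estimate a Wiener filter that maps the predicted trace onto the observed trace, then penalize filter energy away from zero lag; unlike a subtraction-based misfit, this penalty varies smoothly with a time shift between the traces.

    import numpy as np

    def matching_filter_misfit(pred, obs, half_len=50, eps=1e-3):
        lags = np.arange(-half_len, half_len + 1)
        G = np.column_stack([np.roll(pred, L) for L in lags])     # columns = shifted predictions
        # Damped least squares: w = argmin ||G w - obs||^2 + eps ||w||^2
        w = np.linalg.solve(G.T @ G + eps * np.eye(G.shape[1]), G.T @ obs)
        T = np.abs(lags)                                          # penalty weight grows with |lag|
        return np.sum((T * w) ** 2) / np.sum(w ** 2)              # normalized, amplitude-independent

    t = np.linspace(0.0, 1.0, 500)
    wavelet = lambda t0: np.exp(-((t - t0) / 0.02) ** 2)
    obs = wavelet(0.50)
    print(matching_filter_misfit(wavelet(0.50), obs))   # near zero: traces already aligned
    print(matching_filter_misfit(wavelet(0.56), obs))   # larger, and grows smoothly with the shift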

  15. The Social Norms of Suicidal and Self-Harming Behaviours in Scottish Adolescents.

    PubMed

    Quigley, Jody; Rasmussen, Susan; McAlaney, John

    2017-03-15

    Although the suicidal and self-harming behaviour of individuals is often associated with similar behaviours in people they know, little is known about the impact of perceived social norms on those behaviours. In a range of other behavioural domains (e.g., alcohol consumption, smoking, eating behaviours) perceived social norms have been found to strongly predict individuals' engagement in those behaviours, although discrepancies often exist between perceived and reported norms. Interventions which align perceived norms more closely with reported norms have been effective in reducing damaging behaviours. The current study aimed to explore whether the Social Norms Approach is applicable to suicidal and self-harming behaviours in adolescents. Participants were 456 pupils from five Scottish high-schools (53% female, mean age = 14.98 years), who completed anonymous, cross-sectional surveys examining reported and perceived norms around suicidal and self-harming behaviour. Friedman's ANOVA with post-hoc Wilcoxon signed-rank tests indicated that proximal groups were perceived as less likely to engage in or be permissive of suicidal and self-harming behaviours than participants reported themselves, whilst distal groups tended towards being perceived as more likely to do so. Binary logistic regression analyses identified a number of perceived norms associated with reported norms, with close friends' norms positively associated with all outcome variables. The Social Norms Approach may be applicable to suicidal and self-harming behaviour, but associations between perceived and reported norms and predictors of reported norms differ to those found in other behavioural domains. Theoretical and practical implications of the findings are considered.

  16. The Social Norms of Suicidal and Self-Harming Behaviours in Scottish Adolescents

    PubMed Central

    Quigley, Jody; Rasmussen, Susan; McAlaney, John

    2017-01-01

    Although the suicidal and self-harming behaviour of individuals is often associated with similar behaviours in people they know, little is known about the impact of perceived social norms on those behaviours. In a range of other behavioural domains (e.g., alcohol consumption, smoking, eating behaviours) perceived social norms have been found to strongly predict individuals’ engagement in those behaviours, although discrepancies often exist between perceived and reported norms. Interventions which align perceived norms more closely with reported norms have been effective in reducing damaging behaviours. The current study aimed to explore whether the Social Norms Approach is applicable to suicidal and self-harming behaviours in adolescents. Participants were 456 pupils from five Scottish high-schools (53% female, mean age = 14.98 years), who completed anonymous, cross-sectional surveys examining reported and perceived norms around suicidal and self-harming behaviour. Friedman’s ANOVA with post-hoc Wilcoxon signed-rank tests indicated that proximal groups were perceived as less likely to engage in or be permissive of suicidal and self-harming behaviours than participants reported themselves, whilst distal groups tended towards being perceived as more likely to do so. Binary logistic regression analyses identified a number of perceived norms associated with reported norms, with close friends’ norms positively associated with all outcome variables. The Social Norms Approach may be applicable to suicidal and self-harming behaviour, but associations between perceived and reported norms and predictors of reported norms differ to those found in other behavioural domains. Theoretical and practical implications of the findings are considered. PMID:28294999

  17. Performance of Dutch children on the Bayley III: a comparison study of US and Dutch norms.

    PubMed

    Steenis, Leonie J P; Verhoeven, Marjolein; Hessen, Dave J; van Baar, Anneloes L

    2015-01-01

    The Bayley Scales of Infant and Toddler Development-third edition (Bayley-III) are frequently used to assess early child development worldwide. However, the original standardization only included US children, and it is still unclear whether or not these norms are adequate for use in other populations. Recently, norms for the Dutch version of the Bayley-III (the Bayley-III-NL) were developed. Scores based on Dutch and US norms were compared to study the need for population-specific norms. Scaled scores based on Dutch and US norms were compared for 1912 children between 14 days and 42 months 14 days. Next, the proportions of children scoring below -1 SD and below -2 SD based on the two norms were compared, to identify over- or under-referral for developmental delay resulting from non-population-based norms. Scaled scores based on Dutch norms fluctuated around values based on US norms on all subtests. The extent of the deviations differed across ages and subtests. Differences in means were significant across all five subtests (p < .01), with small to large effect sizes (partial η² ranging from .03 to .26). Using the US instead of Dutch norms resulted in over-referral regarding gross motor skills, and under-referral regarding cognitive, receptive communication, expressive communication, and fine motor skills. The Dutch norms differ from the US norms for all subtests and these differences are clinically relevant. Population-specific norms are needed to identify children with low scores for referral and intervention, and to facilitate international comparisons of population data.

  18. Plasmon excitations with a semi-integer angular momentum.

    PubMed

    Mendonça, J T; Serbeto, A; Vieira, J

    2018-05-18

    We provide an explicit model for a spin-1/2 quasi-particle, based on the superposition of plasmon excitations in a quantum plasma with intrinsic orbital angular momentum. Such quasi-particle solutions can show remarkable similarities with single electrons moving in vacuum: they have spin-1/2, a finite rest mass, and quantum dispersion. We also show that these quasi-particle solutions satisfy an energy-minimum criterion.

  19. 46 CFR Table 151.05 to Subpart 151... - Summary of Minimum Requirements

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., see Phenol Cresylate spent caustic Atmos. Amb. III 1 i i 2 i Integral Gravity Open Open II G-1 NR Vent N No .50-73.55-1(b) NA NA G Cresylic acid, sodium salt solution, see Cresylate spent caustic.... II G-2 NR Vent N Yes .50-73 NA NA G Caustic potash solution Atmos. Amb.Elev. III 1 i 2 i Integral...

  20. Strategies for Choosing Descent Flight-Path Angles for Small Jets

    NASA Technical Reports Server (NTRS)

    Wu, Minghong Gilbert; Green, Steven M.

    2012-01-01

    Three candidate strategies for choosing the descent flight path angle (FPA) for small jets are proposed, analyzed, and compared for fuel efficiency under arrival metering conditions. The strategies vary in operational complexity from a universally fixed FPA, or an FPA function that varies with descent speed for improved fuel efficiency, to the minimum-fuel FPA computed for each flight based on winds, route, and speed profile. Methodologies for selecting the parameter for the first two strategies are described. The differences in fuel burn are analyzed over a year's worth of arrival traffic and atmospheric conditions recorded for the Dallas/Fort Worth (DFW) Airport during 2011. The results show that the universally fixed FPA strategy (same FPA for all flights, all year) burns on average 26 lbs more fuel per flight as compared to the minimum-fuel solution. This FPA is adapted to the arrival gate (direction of entry to the terminal) and various timespans (season, month and day) to improve fuel efficiency. Compared to a typical FPA of approximately 3 degrees, the adapted FPAs vary significantly, by up to 1.3 degrees from one arrival gate to another or up to 1.4 degrees from one day to another. Adapting the universally fixed FPA strategy to the arrival gate or to each day reduces the extra fuel burn relative to the minimum-fuel solution by 27% and 34%, respectively. The adaptations to gate and time combined show up to a 57% reduction of the extra fuel burn. The second strategy, an FPA function, contributes a 17% reduction in the 26 lbs of extra fuel burn over the universally fixed FPA strategy. Compared to the corresponding adaptations of the universally fixed FPA, adaptations of the FPA function reduce the extra fuel burn anywhere from 15-23% depending on the extent of adaptation. The combined effect of the FPA function strategy with both directional and temporal adaptation recovers 67% of the extra fuel relative to the minimum-fuel solution.
