Sample records for constrained minimum variance

  1. The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey

    DTIC Science & Technology

    2004-05-10

    aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation)... minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well...

  2. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods that improve the performance of the proposed algorithm at low SNR. Simulation results show the superior performance of our proposed methods.

  3. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem has been tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. The results show that, in contrast to the conventional energy-based methods, the proposed methods are phase-sensitive owing to the distortionless constraint used in formulating the array filters, which helps enhance audio quality and focusing performance.
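
    The MVDR weights referred to in the record above have a simple closed form: minimize w^H R w subject to the distortionless constraint a^H w = 1, giving w = R^{-1} a / (a^H R^{-1} a). Below is a minimal NumPy sketch of that computation; the array size, focal-point distances, and covariance model are illustrative assumptions, not the paper's configuration.

      import numpy as np

      # Hypothetical free-field steering vector from M loudspeakers to a focal point,
      # narrowband at frequency f_hz; distances_m are assumed element-to-focus distances.
      def steering_vector(distances_m, f_hz, c=343.0):
          k = 2 * np.pi * f_hz / c                        # wavenumber
          return np.exp(-1j * k * distances_m) / distances_m

      rng = np.random.default_rng(0)
      M = 8
      d_focus = np.linspace(1.0, 1.7, M)                  # assumed element-to-focus distances
      a = steering_vector(d_focus, f_hz=1000.0)

      # Assumed Hermitian positive-definite spatial covariance at the control points
      # (in the paper this comes from the source array model, not from random data).
      G = np.eye(M) + 0.1 * (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
      R = G @ G.conj().T / M + 1e-3 * np.eye(M)

      # MVDR: minimize w^H R w subject to the distortionless constraint a^H w = 1.
      Rinv_a = np.linalg.solve(R, a)
      w_mvdr = Rinv_a / (a.conj() @ Rinv_a)

      print("constraint a^H w =", a.conj() @ w_mvdr)      # ~1 (distortionless)
      print("output power w^H R w =", np.real(w_mvdr.conj() @ R @ w_mvdr))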

  4. Beamforming approaches for untethered, ultrasonic neural dust motes for cortical recording: a simulation study.

    PubMed

    Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M

    2014-01-01

    In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.

  5. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (EEG/MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.

  6. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  7. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  8. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    PubMed

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

    State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.

  9. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
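
    The regularization trade-off described above is the usual decomposition MSE = bias^2 + variance, estimated from repeated reconstructions of a known test image. A toy sketch follows, in which the NIR inversion is replaced by a hypothetical Tikhonov-like smoother so the bias and variance terms can be computed explicitly; all parameters are made up for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy 1-D "image" and a hypothetical regularized reconstruction: heavier smoothing
      # as the regularization parameter lam grows (stand-in for the NIR inversion).
      truth = np.sin(np.linspace(0, 2 * np.pi, 64))

      def reconstruct(noisy, lam):
          # Tikhonov-like smoother: x_hat = argmin ||x - noisy||^2 + lam * ||D x||^2
          n = noisy.size
          D = np.diff(np.eye(n), axis=0)                 # first-difference operator
          return np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)

      def image_mse_components(lam, n_repeats=100, noise_sigma=0.3):
          recons = np.array([reconstruct(truth + noise_sigma * rng.standard_normal(truth.size), lam)
                             for _ in range(n_repeats)])
          bias2 = np.mean((recons.mean(axis=0) - truth) ** 2)   # squared image bias
          var = np.mean(recons.var(axis=0))                     # image variance
          return bias2, var, bias2 + var                        # image MSE = bias^2 + variance

      for lam in (0.1, 1.0, 10.0, 100.0):
          b2, v, mse = image_mse_components(lam)
          print(f"lam={lam:6.1f}  bias^2={b2:.4f}  variance={v:.4f}  MSE={mse:.4f}")

    As in the abstract, variance dominates when the regularization parameter is small and bias dominates when it is large, so the minimum image MSE sits in between.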

  10. 75 FR 40797 - Upper Peninsula Power Company; Notice of Application for Temporary Amendment of License and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    ... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...

  11. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  12. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
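
    The conversion mentioned above can be read as follows: write down the Lagrange (first-order necessary) conditions for the constrained minimum and minimize their squared residual as an unconstrained function of the design variables and multipliers. The sketch below does this for a toy equality-constrained problem, using SciPy's Nelder-Mead in place of the genetic algorithm; the problem and solver choice are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import minimize

      # Toy problem: minimize f(x) = x1^2 + 2*x2^2 subject to g(x) = x1 + x2 - 1 = 0.
      # Necessary conditions: grad f + lam * grad g = 0 and g = 0.
      def f_grad(x):
          return np.array([2.0 * x[0], 4.0 * x[1]])

      def g(x):
          return x[0] + x[1] - 1.0

      def g_grad(x):
          return np.array([1.0, 1.0])

      def residual(z):
          # z packs the design variables and the Lagrange multiplier.
          x, lam = z[:2], z[2]
          stationarity = f_grad(x) + lam * g_grad(x)
          return np.sum(stationarity ** 2) + g(x) ** 2   # unconstrained objective

      res = minimize(residual, x0=np.zeros(3), method="Nelder-Mead",
                     options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
      x_opt, lam_opt = res.x[:2], res.x[2]
      print("x* =", x_opt)            # analytic solution is (2/3, 1/3)
      print("lambda* =", lam_opt)     # analytic multiplier is -4/3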

  13. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the ratio of maximum to minimum power (from approximately 3:1 up to approximately 20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.

  14. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  15. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.

  16. Minimum-Cost Aircraft Descent Trajectories with a Constrained Altitude Profile

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G.; Sadovsky, Alexander V.

    2015-01-01

    An analytical formula for solving the speed profile that accrues minimum cost during an aircraft descent with a constrained altitude profile is derived. The optimal speed profile first reaches a certain speed, called the minimum-cost speed, as quickly as possible using an appropriate extreme value of thrust. The speed profile then stays on the minimum-cost speed as long as possible, before switching to an extreme value of thrust for the rest of the descent. The formula is applied to an actual arrival route and its sensitivity to winds and airlines' business objectives is analyzed.

  17. Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.

    PubMed

    Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong

    2014-09-01

    A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
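
    The core idea above, realizing the MVDR solution with conjugate-gradient iterations rather than an explicit correlation-matrix inverse, can be sketched as a plain CG solve of R w_tilde = a followed by the constraint normalization w = w_tilde / (a^H w_tilde). This is a generic illustration on assumed data, not the MICCG/SICCG algorithms themselves.

      import numpy as np

      rng = np.random.default_rng(2)

      def conjugate_gradient(A, b, n_iter=100, tol=1e-10):
          # Standard CG for a Hermitian positive-definite A; never forms A^{-1}.
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = np.real(r.conj() @ r)
          for _ in range(n_iter):
              Ap = A @ p
              alpha = rs_old / np.real(p.conj() @ Ap)
              x = x + alpha * p
              r = r - alpha * Ap
              rs_new = np.real(r.conj() @ r)
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # Assumed M-sensor uniform linear array: noise-only snapshots and a steering
      # vector for a 20-degree look direction (illustrative, not the paper's data).
      M, N = 8, 200
      a = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(20.0)))
      X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
      R = X @ X.conj().T / N + 1e-3 * np.eye(M)          # sample correlation matrix

      w_tilde = conjugate_gradient(R, a)                 # iterative solve of R w_tilde = a
      w_mvdr = w_tilde / (a.conj() @ w_tilde)            # enforce the distortionless constraint

      print("||R w_tilde - a|| =", np.linalg.norm(R @ w_tilde - a))
      print("a^H w =", a.conj() @ w_mvdr)                # ~1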

  18. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving a target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
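
    For reference, the minimum-variance solution of the mean-variance model has a closed form. The sketch below computes the global minimum-variance weights (minimize w' Sigma w subject to w' 1 = 1) and the minimum-variance weights for a target return via the standard two-constraint Lagrangian solution, on made-up return data rather than the FBMKLCI stocks.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical weekly returns for 5 assets (rows = weeks, columns = assets).
      returns = 0.002 + 0.02 * rng.standard_normal((260, 5))
      mu = returns.mean(axis=0)                  # mean return per asset
      Sigma = np.cov(returns, rowvar=False)      # return covariance matrix
      ones = np.ones(len(mu))

      # Global minimum-variance portfolio: minimize w' Sigma w subject to w' 1 = 1.
      w_gmv = np.linalg.solve(Sigma, ones)
      w_gmv /= ones @ w_gmv

      # Minimum-variance portfolio for a target return r_star: add the constraint w' mu = r_star.
      def min_var_for_target(r_star):
          Sinv = np.linalg.inv(Sigma)
          A = ones @ Sinv @ mu
          B = mu @ Sinv @ mu
          C = ones @ Sinv @ ones
          D = B * C - A * A
          lam = (C * r_star - A) / D             # Lagrange multipliers of the two constraints
          gam = (B - A * r_star) / D
          return Sinv @ (lam * mu + gam * ones)

      w_tgt = min_var_for_target(r_star=0.003)
      print("GMV weights:", np.round(w_gmv, 3), " variance:", w_gmv @ Sigma @ w_gmv)
      print("target-return weights sum:", w_tgt.sum(), " mean return:", w_tgt @ mu)

    Realistic constraints such as cardinality limits, mentioned in a later record in this listing, break this closed form and are typically handled with integer programming or metaheuristics.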

  19. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D0. Note that the diameter of a tree represents the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  20. The Particle Size Distribution in Saturn’s C Ring from UVIS and VIMS Stellar Occultations and RSS Radio Occultations

    NASA Astrophysics Data System (ADS)

    Jerousek, Richard Gregory; Colwell, Josh; Hedman, Matthew M.; French, Richard G.; Marouf, Essam A.; Esposito, Larry; Nicholson, Philip D.

    2017-10-01

    The Cassini Ultraviolet Imaging Spectrograph (UVIS) and Visual and Infrared Mapping Spectrometer (VIMS) have measured ring optical depths over a wide range of viewing geometries at effective wavelengths of 0.15 μm and 2.9 μm, respectively. Using Voyager S and X band radio occultations and the direct inversion of the forward scattered S band signal, Marouf et al. (1982, 1983) and Zebker et al. (1985) determined the power-law size distribution parameters assuming a minimum particle radius of 1 mm. Many further studies have also constrained aspects of the particle size distribution throughout the main rings. Marouf et al. (2008a) determined the smallest ring particles to have radii of 4-5 mm using Cassini RSS data. Harbison et al. (2013) used VIMS solar occultations and also found minimum particle sizes of 4-5 mm in the C ring with q ~ 3.1, where n(a)da=Ca^(-q)da is the assumed differential power-law size distribution for particles of radius a. Recent studies of excess variance in the stellar signal by Colwell et al. (2017, submitted) constrain the cross-section-weighted effective particle radius to 1 m to several meters. Using the wide range of viewing geometries available to VIMS and UVIS stellar occultations, we find that normal optical depth does not strongly depend on viewing geometry at 10 km resolution (a dependence that would be expected if self-gravity wakes were present). Throughout the C ring, we fit power-law-derived optical depths to those measured by UVIS, VIMS, and the Cassini Radio Science Subsystem (RSS) at 0.94 and 3.6 cm wavelengths to constrain the four parameters of the size distribution at 10 km radial resolution. We find significant amounts of particle size sorting throughout the region, with a positive correlation between maximum particle size (amax) and normal optical depth and a mean value of amax ~ 3 m in the background C ring. This correlation is negative in the C ring plateaus. We find an inverse correlation between minimum particle radius and normal optical depth and a mean value of amin ~ 4 mm in the background C ring, with slightly larger minimum particle sizes in the C ring plateaus.

  1. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  2. An apparent contradiction: increasing variability to achieve greater precision?

    PubMed

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
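
    The "good variance"/"bad variance" split used above can be illustrated numerically: deviations of the joint configuration are projected onto the null space of the Jacobian mapping joint angles to mediolateral foot position (motion there leaves the foot unchanged) and onto its orthogonal complement (motion there displaces the foot). Everything in the sketch below, the Jacobian, the number of joints, the simulated data, and the simple normalization of the synergy index, is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(4)
      n_joints, n_steps = 6, 200

      # Hypothetical 1 x n_joints Jacobian of mediolateral foot position with respect to
      # the joint angles at the mean configuration (in a real analysis it comes from limb geometry).
      J = rng.standard_normal((1, n_joints))

      # Simulated joint-angle deviations from the mean configuration across steps.
      dq = 0.05 * rng.standard_normal((n_steps, n_joints))

      # Orthonormal basis of the null space of J: the "uncontrolled manifold".
      _, _, Vt = np.linalg.svd(J)
      null_basis = Vt[1:].T                      # n_joints x (n_joints - 1), since J has rank 1

      v_good = np.sum((dq @ null_basis).var(axis=0, ddof=1)) / (n_joints - 1)  # per-DOF "good" variance
      v_bad = (dq @ Vt[0]).var(ddof=1)                                          # per-DOF "bad" variance

      synergy_index = (v_good - v_bad) / (v_good + v_bad)   # simple normalization (illustrative)
      print("good variance per DOF:", v_good)
      print("bad variance per DOF:", v_bad)
      print("synergy index:", synergy_index)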

  3. 76 FR 1145 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...

  4. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models that are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of the uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering the single best model, variances that stem from uncertainty in the model structure will be ignored. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability. However, considering the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: using a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.

  5. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  6. Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method is a robust, high-resolution spectrum estimation technique. Based on the theory of SAR imaging, we analyze the signal model of SAR imagery and show that it is amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
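
    The minimum variance (Capon) spectrum estimator underlying the approach above evaluates P(f) = 1 / (a(f)^H R^{-1} a(f)) from the snapshot covariance R and a Fourier steering vector a(f). A minimal sketch on a synthetic two-tone signal, with illustrative parameters:

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic signal: two complex tones (normalized frequencies 0.10 and 0.13) in noise.
      n = 512
      t = np.arange(n)
      x = (np.exp(2j * np.pi * 0.10 * t) + 0.5 * np.exp(2j * np.pi * 0.13 * t)
           + 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

      # Sample covariance of length-m sliding snapshots, with diagonal loading.
      m = 32
      snaps = np.array([x[i:i + m] for i in range(n - m + 1)])
      R = snaps.T @ snaps.conj() / snaps.shape[0]
      R += 1e-3 * np.real(np.trace(R)) / m * np.eye(m)

      # Minimum variance (Capon) spectrum: P(f) = 1 / (a(f)^H R^{-1} a(f)).
      Rinv = np.linalg.inv(R)
      freqs = np.linspace(0.0, 0.5, 501)
      p_mv = np.empty_like(freqs)
      for k, f in enumerate(freqs):
          a = np.exp(2j * np.pi * f * np.arange(m))     # Fourier steering vector
          p_mv[k] = 1.0 / np.real(a.conj() @ Rinv @ a)

      print("strongest peak near f =", freqs[np.argmax(p_mv)])   # expect ~0.10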

  7. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the total sample so as to minimize the variances of best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  8. Survival of primary condylar-constrained total knee arthroplasty at a minimum of 7 years.

    PubMed

    Maynard, Lance M; Sauber, Timothy J; Kostopoulos, Vasileios K; Lavigne, Gregory S; Sewecke, Jeffrey J; Sotereanos, Nicholas G

    2014-06-01

    The purpose of the present study is to retrospectively analyze clinical and radiographic outcomes in primary constrained condylar knee arthroplasty at a minimum follow-up of 7 years. Given the concern for early aseptic loosening in constrained implants, we focused on this outcome. Our cohort consists of 127 constrained condylar knees. The mean age of patients in the study was 68.3 years, with a mean follow-up of 110.7 months. The diagnosis was primary osteoarthritis in 92%. There were four periprosthetic distal femur fractures, with a rate of revision of 0.8%. No implants were revised for aseptic loosening. Kaplan-Meier survivorship analysis with removal of any component as the end point revealed that the 10-year rate of survival of the primary CCK was 97.6% (95% CI, 94%-100%). Copyright © 2014. Published by Elsevier Inc.

  9. CMB-S4 and the hemispherical variance anomaly

    NASA Astrophysics Data System (ADS)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited, however, full northern coverage is still preferable.

  10. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  11. Synthesis of correlation filters: a generalized space-domain approach for improved filter characteristics

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.

    1990-12-01

    Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.

  12. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
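
    The minimum-variance pooling compared against above has a compact closed form for a common mean: with estimate vector y and covariance matrix C, the weights are w = C^{-1} 1 / (1^T C^{-1} 1) and the combined variance is 1 / (1^T C^{-1} 1). A toy sketch with made-up numbers standing in for correlated eigenvalue estimates:

      import numpy as np

      # Hypothetical correlated Monte Carlo eigenvalue estimates and their covariance
      # (numbers are made up; in practice C comes from the sample statistics).
      y = np.array([1.002, 0.998, 1.005])
      C = np.array([[4.0, 1.5, 1.0],
                    [1.5, 3.0, 0.8],
                    [1.0, 0.8, 5.0]]) * 1e-6

      ones = np.ones_like(y)
      w = np.linalg.solve(C, ones)
      w /= ones @ w                                     # minimum-variance unbiased weights

      combined = w @ y
      combined_var = w @ C @ w                          # equals 1 / (1^T C^{-1} 1)

      print("weights:", np.round(w, 3))
      print("combined estimate:", combined, " variance:", combined_var)
      print("simple-average variance:", ones @ C @ ones / len(y) ** 2)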

  13. Applications of active adaptive noise control to jet engines

    NASA Technical Reports Server (NTRS)

    Shoureshi, Rahmat; Brackney, Larry

    1993-01-01

    During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.

  14. Large amplitude MHD waves upstream of the Jovian bow shock

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.

    1983-01-01

    Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.
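
    The minimum variance analysis used above amounts to an eigendecomposition of the covariance matrix of the measured field components: the eigenvector belonging to the smallest eigenvalue is the minimum variance direction, and the eigenvalue ratios give the power anisotropy. A sketch on synthetic data whose wave geometry is an assumption chosen so the answer is known:

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic field: mean field along z plus a nearly circularly polarized transverse
      # fluctuation in the x-y plane, so the minimum variance direction should be ~z.
      n = 2000
      phase = 2 * np.pi * 0.0023 * np.arange(n)          # ~2.3 mHz at 1-s cadence
      B = np.column_stack([
          2.0 * np.cos(phase) + 0.1 * rng.standard_normal(n),
          2.0 * np.sin(phase) + 0.1 * rng.standard_normal(n),
          5.0 + 0.1 * rng.standard_normal(n),
      ])

      # Covariance (variance) matrix of the field components and its eigendecomposition.
      C = np.cov(B, rowvar=False)
      eigvals, eigvecs = np.linalg.eigh(C)               # eigenvalues in ascending order

      min_var_dir = eigvecs[:, 0]                        # eigenvector of the smallest eigenvalue
      mean_field_dir = B.mean(axis=0) / np.linalg.norm(B.mean(axis=0))
      angle = np.degrees(np.arccos(abs(min_var_dir @ mean_field_dir)))

      print("eigenvalue ratios (max:mid:min):", np.round(eigvals[::-1] / eigvals[0], 1))
      print("angle between minimum variance direction and mean field: %.1f deg" % angle)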

  15. Sex-specific genetic variance and the evolution of sexual dimorphism: a systematic review of cross-sex genetic correlations.

    PubMed

    Poissant, Jocelyn; Wilson, Alastair J; Coltman, David W

    2010-01-01

    The independent evolution of the sexes may often be constrained if male and female homologous traits share a similar genetic architecture. Thus, cross-sex genetic covariance is assumed to play a key role in the evolution of sexual dimorphism (SD) with consequent impacts on sexual selection, population dynamics, and speciation processes. We compiled cross-sex genetic correlations (r(MF)) estimates from 114 sources to assess the extent to which the evolution of SD is typically constrained and test several specific hypotheses. First, we tested if r(MF) differed among trait types and especially between fitness components and other traits. We also tested the theoretical prediction of a negative relationship between r(MF) and SD based on the expectation that increases in SD should be facilitated by sex-specific genetic variance. We show that r(MF) is usually large and positive but that it is typically smaller for fitness components. This demonstrates that the evolution of SD is typically genetically constrained and that sex-specific selection coefficients may often be opposite in sign due to sub-optimal levels of SD. Most importantly, we confirm that sex-specific genetic variance is an important contributor to the evolution of SD by validating the prediction of a negative correlation between r(MF) and SD.

  16. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995), that have no access to such information.
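
    The OI analysis step summarized above is the standard minimum-variance (BLUE) update x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^{-1}. The sketch below applies it on a tiny 1-D grid with an assumed Gaussian background-error covariance, illustrating how observations fill data voids and shrink the expected error.

      import numpy as np

      # Tiny 1-D domain with n grid points; observations at a few of them.
      n = 20
      grid = np.arange(n, dtype=float)

      # Background (prior) state and its error covariance B with a Gaussian correlation model.
      x_b = np.zeros(n)
      length_scale, sigma_b = 3.0, 1.0
      B = sigma_b ** 2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / length_scale) ** 2)

      # Observations y at grid points 4, 10, 15 with uncorrelated error variance sigma_o^2.
      obs_idx = [4, 10, 15]
      H = np.zeros((len(obs_idx), n))
      H[np.arange(len(obs_idx)), obs_idx] = 1.0
      sigma_o = 0.2
      R = sigma_o ** 2 * np.eye(len(obs_idx))
      y = np.array([1.0, -0.5, 0.8])

      # Optimal interpolation (minimum-variance / BLUE) analysis.
      K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)       # gain
      x_a = x_b + K @ (y - H @ x_b)                      # analysis fills the data voids smoothly
      A = (np.eye(n) - K @ H) @ B                        # analysis error covariance

      print("analysis at observation points:", np.round(x_a[obs_idx], 3))
      print("error std at grid point 7: %.2f (background) -> %.2f (analysis)"
            % (np.sqrt(B[7, 7]), np.sqrt(A[7, 7])))

    The same update can be read as least squares with a priori information: B encodes the prior, R the observation errors.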

  17. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    ... during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution...; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1...

  18. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  19. Determining Size Distribution at the Phoenix Landing Site

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.

    2016-12-01

    Dust aerosols play a crucial role in determining atmospheric radiative heating on Mars through absorption and scattering of sunlight. How dust scatters and absorbs light is dependent on size, shape, composition, and quantity. Optical properties of the dust have been well constrained in the visible and near infrared wavelengths using various methods [Wolff et al. 2009, Lemmon et al. 2004]. In addition, the dust is nonspherical, and irregular shapes have shown to work well in determining effective particle size [Pollack et al. 1977]. Variance of the size distribution is less constrained but constitutes an important parameter in fully describing the dust. The Phoenix Lander's Surface Stereo Imager performed several cross-sky brightness surveys to determine the size distribution and scattering properties of dust in the wavelength range of 400 to 1000 nm. In combination with a single-layer radiative transfer model, these surveys can be used to help constrain variance of the size distribution. We will present a discussion of seasonal size distribution as it pertains to the Phoenix landing site.

  20. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849

  1. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.

  2. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  3. Analysis of 20 magnetic clouds at 1 AU during a solar minimum

    NASA Astrophysics Data System (ADS)

    Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.

    We study 20 magnetic clouds, observed in situ by the spacecraft Wind, at the Lagrangian point L1, from 22 August, 1995, to 7 November, 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a not null impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH

  4. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
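
    For reference, the generalized-log family mentioned above has the form glog_c(y) = ln(y + sqrt(y^2 + c)), which behaves like a logarithm for large y but stays finite (and roughly linear) near zero. The sketch below shows the variance-stabilizing effect on a simple two-component error model; the model, its parameters, and the first-order choice c = (sigma_eps / sigma_eta)^2 are illustrative assumptions, not the estimation procedure of the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      def glog(y, c):
          # Generalized-log transform: ~log(2y) for large y, roughly linear near zero.
          return np.log(y + np.sqrt(y ** 2 + c))

      # Illustrative two-component error model for intensities: y = mu * exp(eta) + eps,
      # with multiplicative noise eta and additive noise eps (parameters are made up).
      s_eta, s_eps = 0.15, 50.0
      mu_levels = np.array([10.0, 100.0, 1000.0, 10000.0])
      y = (mu_levels * np.exp(s_eta * rng.standard_normal((5000, 4)))
           + s_eps * rng.standard_normal((5000, 4)))

      c = (s_eps / s_eta) ** 2        # assumed first-order variance-stabilizing constant
      print("raw-scale std by expression level:  ", np.round(y.std(axis=0), 1))
      print("glog-scale std by expression level: ", np.round(glog(y, c).std(axis=0), 3))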

  5. Helicopter Control Energy Reduction Using Moving Horizontal Tail

    PubMed Central

    Oktay, Tugrul; Sal, Firat

    2015-01-01

    A helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. The MHT equations are integrated into these models, and the combined models are linearized around a straight level flight condition. A specific variance constrained control strategy, namely output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement in classical control behavior, closed-loop analyses are performed. PMID:26180841

  6. Minimum energy control and optimal-satisfactory control of Boolean control network

    NASA Astrophysics Data System (ADS)

    Li, Fangfei; Lu, Xiwen

    2013-12-01

    In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, minimum energy, constrained minimum energy, and optimal-satisfactory control designs for Boolean control networks are given, respectively. A numerical example is presented to illustrate the efficiency of the obtained results.
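
    A hedged sketch of the shortest-path idea behind the minimum-energy design: states of the Boolean network are treated as graph nodes, an edge from state i to state j carries the energy of the cheapest one-step control achieving that transition, and Floyd's algorithm then yields the minimum total energy between any pair of states. The weight matrix here is hypothetical; building it from the semi-tensor product representation is not shown.

```python
# Floyd-Warshall over one-step control energies; weights[i][j] = float('inf')
# if state j is unreachable from state i in one step.
import itertools

def floyd_warshall(weights):
    n = len(weights)
    dist = [row[:] for row in weights]
    for k, i, j in itertools.product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```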

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis, Alfredo

    The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.

  8. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest, very successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
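
    For orientation, a hedged sketch of the basic firefly move (the paper's modification for constrained problems is not reproduced): firefly i moves toward a brighter firefly j with an attractiveness that decays with distance; beta0, gamma, and alpha are the standard FA parameters, and the values shown are illustrative.

```python
# One firefly position update toward a brighter firefly (standard FA move).
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    r2 = np.sum((x_i - x_j) ** 2)          # squared distance between fireflies
    beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)
```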

  9. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest, very successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645

  10. Multiscale field-aligned current analyzer

    NASA Astrophysics Data System (ADS)

    Bunescu, C.; Marghitu, O.; Constantinescu, D.; Narita, Y.; Vogt, J.; Blǎgǎu, A.

    2015-11-01

    The magnetosphere-ionosphere coupling is achieved, essentially, by a superposition of quasi-stationary and time-dependent field-aligned currents (FACs), over a broad range of spatial and temporal scales. The planarity of the FAC structures observed by satellite data and the orientation of the planar FAC sheets can be investigated by the well-established minimum variance analysis (MVA) of the magnetic perturbation. However, such investigations are often constrained to a predefined time window, i.e., to a specific scale of the FAC. The multiscale field-aligned current analyzer, introduced here, relies on performing MVA continuously and over a range of scales by varying the width of the analyzing window, appropriate for the complexity of the magnetic field signatures above the auroral oval. The proposed technique provides multiscale information on the planarity and orientation of the observed FACs. A new approach, based on the derivative of the largest eigenvalue of the magnetic variance matrix with respect to the length of the analysis window, makes possible the inference of the current structures' location (center) and scale (thickness). The capabilities of the FAC analyzer are explored analytically for the magnetic field profile of the Harris sheet and tested on synthetic FAC structures with uniform current density and infinite or finite geometry in the cross-section plane of the FAC. The method is illustrated with data observed by the Cluster spacecraft on crossing the nightside auroral region, and the results are cross checked with the optical observations from the Time History of Events and Macroscale Interactions during Substorms ground network.
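
    An illustrative scale scan in the spirit of the analyzer described above, assuming B is an (N, 3) magnetic field array: MVA is repeated for a range of window widths centered on a sample, and the largest eigenvalue of the variance matrix is tracked as a function of window length (the quantity whose derivative the method uses to locate and size current structures). The window widths and centering are assumptions of this sketch.

```python
# Largest eigenvalue of the magnetic variance matrix as a function of window width.
import numpy as np

def largest_eigenvalue_scan(B, center, widths):
    out = []
    for w in widths:
        seg = B[max(0, center - w // 2): center + w // 2 + 1]
        dB = seg - seg.mean(axis=0)
        M = dB.T @ dB / len(seg)
        out.append(np.linalg.eigvalsh(M)[-1])  # eigenvalues ascending; take largest
    return np.array(out)
```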

  11. An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
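
    The Kreisselmeir-Steinhauser envelope over a set of constraint and reduced-objective values g has the standard form KS(g) = g_max + (1/rho) ln(sum_i exp(rho (g_i - g_max))), a smooth, conservative approximation to max(g); a short, numerically stable sketch follows (rho = 50 is an illustrative value of the draw-down parameter).

```python
# Kreisselmeir-Steinhauser (KS) envelope function: a smooth maximum of g.
import numpy as np

def ks_envelope(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho
```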

  12. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.

    PubMed

    Cohen, Michael X; Ridderinkhof, K Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.
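
    As a point of reference for the beamforming step, a minimal sketch of the standard LCMV spatial filter W = (L^T C^-1 L)^-1 L^T C^-1, assuming C is the sensor covariance matrix and L the leadfield for one source location; the diagonal loading and all variable names are assumptions, and this is not the authors' analysis pipeline.

```python
# Linearly constrained minimum variance (LCMV) beamformer weights for one source.
import numpy as np

def lcmv_weights(C, L, reg=1e-3):
    n = C.shape[0]
    Cinv = np.linalg.inv(C + reg * np.trace(C) / n * np.eye(n))  # diagonal loading
    return np.linalg.solve(L.T @ Cinv @ L, L.T @ Cinv)           # (3 x channels)
```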

  13. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  14. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512

  15. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases, where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.

  16. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  17. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    PubMed

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.

  18. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

    PubMed Central

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository. PMID:29551985

  19. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average in which each scalar weight is inversely proportional to the corresponding variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
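
    A worked sketch of the univariate composite estimate described above: two prior estimates are combined with weights inversely proportional to their variances, and the combined variance is never larger than either input variance.

```python
# Inverse-variance weighted (composite) estimate of a scalar parameter.
def composite_estimate(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

# Example: combining 10.0 (variance 4.0) with 12.0 (variance 1.0) gives 11.6,
# with combined variance 0.8.
```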

  20. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means ‘minimizing the risk, given a certain level of return’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches that consider minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz’s model), and semi-variance as the risk measure. In this paper we investigate portfolio selection with minimum transaction lots using conditional value-at-risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach performs better when we work with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
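
    For illustration, a scenario-based CVaR computation of the kind the mean-CVaR model minimizes; the portfolio return series and the 95% confidence level are assumptions of this sketch, and the genetic-algorithm search over integer lot counts is not shown.

```python
# Conditional value-at-risk (CVaR): mean loss in the worst (1 - alpha) tail.
import numpy as np

def cvar(portfolio_returns, alpha=0.95):
    losses = -np.asarray(portfolio_returns, dtype=float)
    var = np.quantile(losses, alpha)        # value-at-risk at level alpha
    return losses[losses >= var].mean()     # average of losses beyond VaR
```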

  1. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.

  2. Robust fuzzy control subject to state variance and passivity constraints for perturbed nonlinear systems with multiplicative noises.

    PubMed

    Chang, Wen-Jer; Huang, Bo-Jyun

    2014-11-01

    The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. Finally, a numerical example for the control of a perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101

  4. Effects of important parameters variations on computing eigenspace-based minimum variance weights for ultrasound tissue harmonic imaging

    NASA Astrophysics Data System (ADS)

    Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-02-01

    In recent years, the minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode Ultrasound imaging (USI). However, the performance of the MV beamformer is degraded at the presence of noise, as a result of the inaccurate covariance matrix estimation which leads to a low quality image. Second harmonic imaging (SHI) provides many advantages over the conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, Eigenspace-based minimum variance (EIBMV) beamformer has been employed for second harmonic USI. The Tissue Harmonic Imaging (THI) is achieved by Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, would lead to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even at the presence of a strong noise). In addition, we have investigated the effects of variations of the important parameters in computing EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (using point target and cyst phantoms), and the proper parameters of EIBMV are indicated for THI.
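
    A hedged sketch of the eigenspace projection that distinguishes EIBMV from plain MV beamforming: the MV weights are projected onto the signal subspace spanned by the largest eigenvectors of the sample covariance R. The covariance R, steering vector a, and subspace dimension L are assumptions here; subarray averaging, temporal smoothing, and the delta-dependent subspace selection studied in the paper are not shown.

```python
# Eigenspace-based minimum variance (EIBMV) weights from MV (Capon) weights.
import numpy as np

def eibmv_weights(R, a, L):
    w_mv = np.linalg.solve(R, a)
    w_mv = w_mv / (a.conj().T @ w_mv)     # minimum variance (Capon) weights
    vals, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    Es = vecs[:, -L:]                     # signal-subspace eigenvectors
    return Es @ (Es.conj().T @ w_mv)      # project MV weights onto signal subspace
```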

  5. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)

  6. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.
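
    For reference, a hedged sketch of the delay-multiply-and-sum combination for a single image point, assuming `delayed` already holds the time-delayed channel samples; this shows the pairwise-product algebra that the paper expands before substituting the MV beamformer, not MVB-DMAS itself.

```python
# Delay-multiply-and-sum (DMAS) of already-delayed channel samples.
import numpy as np

def dmas(delayed):
    out = 0.0
    n = len(delayed)
    for i in range(n - 1):
        for j in range(i + 1, n):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))  # signed square root
    return out
```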

  7. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.

  8. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  9. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  10. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme introduced by the Chinese government in an attempt to minimize the CO2 emissions produced by power plants. The scheme responds to global warming, which is driven largely by excess CO2 in the Earth's atmosphere; while the need for electricity is absolute, the plants producing it are mostly thermal power plants that emit large amounts of CO2. Several approaches to fulfilling this scheme have been proposed; one of them, based on Minimum Cost Flow, results in a Quadratically Constrained Quadratic Programming (QCQP) formulation. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using the Lagrange multiplier method.

  11. Some refinements on the comparison of areal sampling methods via simulation

    Treesearch

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  12. A comparison of coronal and interplanetary current sheet inclinations

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.

    1983-01-01

    The HAO white light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and even on a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.

  13. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  14. Minimum Variance Distortionless Response Beamformer with Enhanced Nulling Level Control via Dynamic Mutated Artificial Immune System

    PubMed Central

    Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136
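
    The MVDR weights that DM-AIS refines are given by the standard closed form w = R^-1 a / (a^H R^-1 a); a minimal sketch follows, where R is the array covariance matrix and a the steering vector toward the desired signal (both assumed available).

```python
# Minimum variance distortionless response (MVDR) beamformer weights.
import numpy as np

def mvdr_weights(R, a):
    Rinv_a = np.linalg.solve(R, a)
    return Rinv_a / (a.conj().T @ Rinv_a)
```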

  15. Minimum variance distortionless response beamformer with enhanced nulling level control via dynamic mutated artificial immune system.

    PubMed

    Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals.

  16. 25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How does a gaming operation apply for a variance from the standards of the part? 542.18 Section 542.18 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.18 How does a gaming operation apply for a...

  17. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model illustrated in the figure. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.

  18. A test of source-surface model predictions of heliospheric current sheet inclination

    NASA Technical Reports Server (NTRS)

    Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.

    1994-01-01

    The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.

  19. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex background, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex background is proposed based on the Otsu method via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes (target region, middle background, and lower background) by maximizing the sum of their between-class variances. Then, the unsupervised background constraint is implemented based on the within-class variance of the target region, and hence the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex background demonstrate that the proposed method enjoys better segmentation performance, and its segmented results are even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, implying that the new method performs well in real-time processing.
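
    As background for the record above, a minimal sketch of the standard Otsu threshold on an 8-bit image, which maximizes the between-class variance over all candidate thresholds; the proposed three-class, background-constrained variant is not reproduced here.

```python
# Otsu's method: choose the threshold that maximizes between-class variance.
import numpy as np

def otsu_threshold(image):
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance at t
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```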

  1. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  2. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  3. EEG Source Reconstruction Reveals Frontal-Parietal Dynamics of Spatial Conflict Processing

    PubMed Central

    Cohen, Michael X; Ridderinkhof, K. Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30–50 Hz), followed by a later alpha-band (8–12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4–8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions. PMID:23451201

  4. Sensor networks for optimal target localization with bearings-only measurements in constrained three-dimensional scenarios.

    PubMed

    Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin

    2013-08-12

    In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix) to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.

  5. The Angular Power Spectrum of BATSE 3B Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Tegmark, Max; Hartmann, Dieter H.; Briggs, Michael S.; Meegan, Charles A.

    1996-01-01

    We compute the angular power spectrum C(sub l) from the BATSE 3B catalog of 1122 gamma-ray bursts and find no evidence for clustering on any scale. These constraints bridge the entire range from small scales (which probe source clustering and burst repetition) to the largest scales (which constrain possible anisotropies from the Galactic halo or from nearby cosmological large-scale structures). We develop an analysis technique that takes the angular position errors into account. For specific clustering or repetition models, strong upper limits can be obtained down to scales l approx. equal to 30, corresponding to a couple of degrees on the sky. The minimum-variance burst weighting that we employ is visualized graphically as an all-sky map in which each burst is smeared out by an amount corresponding to its position uncertainty. We also present separate bandpass-filtered sky maps for the quadrupole term and for the multipole ranges l = 3-10 and l = 11-30, so that the fluctuations on different angular scales can be inspected separately for visual features such as localized 'hot spots' or structures aligned with the Galactic plane. These filtered maps reveal no apparent deviations from isotropy.

  6. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectra density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.

  7. Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado

    USGS Publications Warehouse

    Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.

    2003-01-01

    The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r2 value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r2 values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modeling, and climate change assessments.

  8. Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region

    NASA Astrophysics Data System (ADS)

    Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.

    2005-08-01

    Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.

  9. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
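
    The "minimum variance" method referred to above is, in its standard form, an eigenanalysis of the magnetic-field covariance matrix: the eigenvector associated with the smallest eigenvalue is taken as the normal to the IMF phase plane, from which a propagation delay to a downstream point can be computed. The sketch below is a generic, hedged illustration of that idea on synthetic data; the variable names and the test field are assumptions, not the report's actual code.

        import numpy as np

        def minimum_variance_normal(B):
            """Estimate the phase-plane normal from an (N, 3) array of IMF samples.

            The normal is the eigenvector of the field covariance matrix with the
            smallest eigenvalue (classic minimum variance analysis)."""
            cov = np.cov(np.asarray(B, dtype=float), rowvar=False)  # 3x3 covariance of Bx, By, Bz
            eigvals, eigvecs = np.linalg.eigh(cov)                   # eigenvalues in ascending order
            normal = eigvecs[:, 0]                                   # direction of minimum variance
            return normal / np.linalg.norm(normal), eigvals

        # Synthetic example: the field rotates in the x-y plane, so the normal should be near z.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 1000)
        B = np.column_stack([np.sin(10 * np.pi * t),
                             np.cos(10 * np.pi * t),
                             0.05 * rng.standard_normal(t.size)])
        n_hat, lam = minimum_variance_normal(B)
        # For a monitor at r1 and a target at r2 in a solar wind of velocity v_sw,
        # the phase-plane delay would then be n_hat.(r2 - r1) / n_hat.v_sw.
        print("estimated normal:", n_hat)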

  10. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations made on the same units repeatedly over time; the panel is said to be incomplete if each individual has a different number of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the error variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.

  11. Extremely preterm children exhibit increased interhemispheric connectivity for language: findings from fMRI-constrained MEG analysis.

    PubMed

    Barnes-Davis, Maria E; Merhar, Stephanie L; Holland, Scott K; Kadis, Darren S

    2018-04-16

    Children born extremely preterm are at significant risk for cognitive impairment, including language deficits. The relationship between preterm birth and neurological changes that underlie cognitive deficits is poorly understood. We use a stories-listening task in fMRI and MEG to characterize language network representation and connectivity in children born extremely preterm (n = 15, <28 weeks gestation, ages 4-6 years), and in a group of typically developing control participants (n = 15, term birth, 4-6 years). Participants completed a brief neuropsychological assessment. Conventional fMRI analyses revealed no significant differences in language network representation across groups (p > .05, corrected). The whole-group fMRI activation map was parcellated to define the language network as a set of discrete nodes, and the timecourse of neuronal activity at each position was estimated using a linearly constrained minimum variance beamformer in MEG. Virtual timecourses were subjected to connectivity and network-based analyses. We observed significantly increased beta-band functional connectivity in extremely preterm children compared to controls (p < .05). Specifically, we observed an increase in connectivity between left and right perisylvian cortex. Subsequent effective connectivity analyses revealed that hyperconnectivity in preterms was due to significantly increased information flux originating from the right hemisphere (p < .05). The total strength and density of the language network were not related to language or nonverbal performance, suggesting that the observed hyperconnectivity is a "pure" effect of prematurity. Although our extremely preterm children exhibited typical language network architecture, we observed significantly altered network dynamics, indicating reliance on an alternative neural strategy for the language task. © 2018 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
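
    For reference, the linearly constrained minimum variance (LCMV) beamformer used above estimates a source time course by applying sensor weights that minimize output variance subject to unit gain on the source lead field, W = (Lᵀ C⁻¹ L)⁻¹ Lᵀ C⁻¹. The following is a minimal, hypothetical sketch of that weight computation with synthetic dimensions and assumed variable names; it is not the authors' MEG pipeline.

        import numpy as np

        def lcmv_weights(leadfield, cov, reg=1e-8):
            """LCMV beamformer weights for one source location.

            leadfield : (n_sensors, n_orientations) gain matrix L
            cov       : (n_sensors, n_sensors) sensor covariance C
            Returns W with W @ L = I (unit gain) and minimum output variance."""
            n = cov.shape[0]
            c_inv = np.linalg.inv(cov + reg * np.trace(cov) / n * np.eye(n))  # regularized inverse
            gram = leadfield.T @ c_inv @ leadfield
            return np.linalg.solve(gram, leadfield.T @ c_inv)

        # Tiny synthetic example: 10 sensors, one fixed-orientation source
        rng = np.random.default_rng(1)
        L = rng.standard_normal((10, 1))
        s_true = np.sin(np.linspace(0.0, 20.0, 500))
        Y = L @ s_true[None, :] + 0.5 * rng.standard_normal((10, 500))
        W = lcmv_weights(L, np.cov(Y))
        s_hat = (W @ Y).ravel()          # recovered virtual source time course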

  12. Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models

    NASA Astrophysics Data System (ADS)

    Styron, R. H.; Hetland, E. A.

    2014-12-01

    Stress is a first-order control on the deformation state of the Earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on the Wasatch fault suggests that maximum tectonic stress can also be constrained, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions of this work include regions where high relief influences fault kinematics (such as Tibet).

  13. A Study of the Southern Ocean: Mean State, Eddy Genesis & Demise, and Energy Pathways

    NASA Astrophysics Data System (ADS)

    Zajaczkovski, Uriel

    The Southern Ocean (SO), due to its deep penetrating jets and eddies, is well-suited for studies that combine surface and sub-surface data. This thesis explores the use of Argo profiles and sea surface height (SSH) altimeter data from a statistical point of view. A linear regression analysis of SSH and hydrographic data reveals that the altimeter can explain, on average, about 35% of the variance contained in the hydrographic fields and more than 95% if estimated locally. Correlation maxima are found at mid-depth, where dynamics are dominated by geostrophy. Near the surface, diabatic processes are significant, and the variance explained by the altimeter is lower. Since SSH variability is associated with eddies, the regression of SSH with temperature (T) and salinity (S) shows the relative importance of S vs T in controlling density anomalies. The AAIW salinity minimum separates two distinct regions; above the minimum, density changes are dominated by T, while below the minimum, S dominates over T. The regression analysis provides a method to remove eddy variability, effectively reducing the variance of the hydrographic fields. We use satellite altimetry and output from an assimilating numerical model to show that the SO has two distinct eddy motion regimes. North and south of the Antarctic Circumpolar Current (ACC), eddies propagate westward with a mean meridional drift directed poleward for cyclonic eddies (CEs) and equatorward for anticyclonic eddies (AEs). Eddies formed within the boundaries of the ACC have an effective eastward propagation with respect to the mean deep ACC flow, and the mean meridional drift is reversed, with warm-core AEs propagating poleward and cold-core CEs propagating equatorward. This circulation pattern drives downgradient eddy heat transport, which could potentially transport a significant fraction (24 to 60 × 10¹³ W) of the net poleward ACC eddy heat flux. We show that the generation of relatively large amplitude eddies is not a ubiquitous feature of the SO but rather a phenomenon that is constrained to five isolated, well-defined "hotspots". These hotspots are located downstream of major topographic features, with their boundaries closely following f/H contours. Eddies generated in these locations show no evidence of a bias in polarity and decay within the boundaries of the generation area. Eddies tend to disperse along f/H contours rather than following lines of latitude. We found enhanced values of both buoyancy (BP) and shear production (SP) inside the hotspots, with BP one order of magnitude larger than SP. This is consistent with baroclinic instability being the main mechanism of eddy generation. The mean potential density field estimated from Argo floats shows that inside the hotspots, isopycnal slopes are steep, indicating availability of potential energy. The hotspots identified in this thesis overlap with previously identified regions of standing meanders. We provide evidence that hotspot locations can be explained by the combined effect of topography, standing meanders that enhance baroclinic instability, and availability of potential energy to generate eddies via baroclinic instabilities.

  14. Field scale lysimeters to assess nutrient management impacts on runoff

    USDA-ARS?s Scientific Manuscript database

    Most empirical studies on the impact of field management on runoff water quality rely on edge-of-field monitoring, which is generally unreplicated and prone to high variances, or small plots, which constrain the use of conventional farm equipment and can hinder insight into landscape processes drivi...

  15. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
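
    The jackknife recommended above for sampling variances can be illustrated generically: the estimator is recomputed with each observation deleted in turn and the leave-one-out estimates are combined. The sketch below is a plain, hedged illustration of that delete-one formula, not the MINQUE/AUP software used in the paper.

        import numpy as np

        def jackknife_variance(data, estimator):
            """Delete-one jackknife variance of an arbitrary scalar estimator."""
            data = np.asarray(data, dtype=float)
            n = data.size
            loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
            return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

        # Example: jackknife sampling variance of a sample-variance estimate
        rng = np.random.default_rng(2)
        x = rng.normal(size=40)
        print(jackknife_variance(x, lambda a: a.var(ddof=1)))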

  16. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  17. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
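
    The link between a repeatability coefficient r and the minimum number of measurements is the standard prediction-accuracy relation m = R²(1 − r) / [r(1 − R²)], with r itself obtainable from ANOVA mean squares. The sketch below uses hypothetical mean squares and the textbook ANOVA estimator; it does not reproduce the CPCOV procedure favoured in the paper.

        import math

        def repeatability_from_anova(ms_genotype, ms_error, k):
            """ANOVA repeatability r = (MSg - MSe) / (MSg + (k - 1) * MSe),
            with k measurements per genotype."""
            return (ms_genotype - ms_error) / (ms_genotype + (k - 1) * ms_error)

        def measurements_needed(r, target_r2=0.85):
            """Minimum number of measurements to predict the true genotypic value
            with coefficient of determination target_r2, given repeatability r."""
            return math.ceil(target_r2 * (1.0 - r) / (r * (1.0 - target_r2)))

        r = repeatability_from_anova(ms_genotype=12.0, ms_error=3.0, k=5)  # hypothetical mean squares
        print(r, measurements_needed(r, 0.85))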

  18. On the design of classifiers for crop inventories

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Takacs, H. C.

    1986-01-01

    Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.

  19. Guidance and Control Architecture Design and Demonstration for Low Ballistic Coefficient Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Swei, Sean

    2014-01-01

    We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.

  20. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B. E.; Hamkins, J.

    2002-01-01

    We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.

  1. Variability of Mars' North Polar Water Ice Cap: I. Analysis of Mariner 9 and Viking Orbiter Imaging Data

    USGS Publications Warehouse

    Bass, Deborah S.; Herkenhoff, Kenneth; Paige, David A.

    2000-01-01

    Previous studies interpreted differences in ice coverage between Mariner 9 and Viking Orbiter observations of Mars' north residual polar cap as evidence of interannual variability of ice deposition on the cap. However, these investigators did not consider the possibility that there could be significant changes in the ice coverage within the northern residual cap over the course of the summer season. Our more comprehensive analysis of Mariner 9 and Viking Orbiter imaging data shows that the appearance of the residual cap does not show large-scale variance on an interannual basis. Rather we find evidence that regions that were dark at the beginning of summer look bright by the end of summer and that this seasonal variation of the cap repeats from year to year. Our results suggest that this brightening was due to the deposition of newly formed water ice on the surface. We find that newly formed ice deposits in the summer season have the same red-to-violet band image ratios as permanently bright deposits within the residual cap. We believe the newly formed ice accumulates in a continuous layer. To constrain the minimum amount of deposited ice, we used observed albedo data in conjunction with calculations using Mie theory for single scattering and a delta-Eddington approximation of radiative transfer for multiple scattering. The brightening could have been produced by a minimum of (1) a ~35-μm-thick layer of 50-μm-sized ice particles with 10% dust or (2) a ~14-μm-thick layer of 10-μm-sized ice particles with 50% dust.

  2. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.

  3. Minimum-variance Brownian motion control of an optically trapped probe.

    PubMed

    Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang

    2009-10-20

    This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 μm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance when the system is time varying, as when operating the actively controlled optical trap in a complex environment.
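
    In the first-order discrete-time model used above, the variance-minimizing feedback simply cancels the deterministic part of the dynamics, so the closed-loop position variance is set by the thermal kick alone. The simulation below uses assumed, purely illustrative parameters (not the paper's calibrated trap model, which also includes actuator dynamics and measurement delay) to contrast the open-loop and minimum-variance closed-loop variances.

        import numpy as np

        # First-order discrete model of the trapped probe: x[k+1] = a*x[k] + b*u[k] + w[k]
        a, b, sigma_w = 0.95, 0.02, 1.0     # assumed trap, actuator and noise parameters
        rng = np.random.default_rng(3)

        def closed_loop_variance(gain, n_steps=20000):
            """Simulate proportional feedback u[k] = -gain * x[k] and return Var(x)."""
            x = np.zeros(n_steps)
            for k in range(n_steps - 1):
                x[k + 1] = a * x[k] + b * (-gain * x[k]) + sigma_w * rng.standard_normal()
            return x.var()

        print("open loop        :", closed_loop_variance(0.0))    # ~ sigma_w**2 / (1 - a**2)
        print("minimum variance :", closed_loop_variance(a / b))  # ~ sigma_w**2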

  4. Cost effective stream-gaging strategies for the Lower Colorado River basin; the Blythe field office operations

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, Edward J.

    1980-01-01

    This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)

  5. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is more stable geometry than a straight or nonmeandering alinement.
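
    The "random walk whose most frequent form minimizes the sum of the squares of the changes in direction" is the sine-generated curve, in which the direction angle varies sinusoidally with distance along the channel. The sketch below traces such a curve for an assumed maximum deflection angle; it illustrates the construction only and does not reproduce the paper's average-radius statistic.

        import numpy as np

        # Sine-generated curve: theta(s) = omega * sin(2*pi*s / M), with M the
        # along-channel length of one meander and omega the maximum deflection angle.
        omega = np.deg2rad(110.0)            # illustrative maximum deflection angle
        M = 100.0                            # illustrative along-channel meander length
        s = np.linspace(0.0, M, 2001)
        theta = omega * np.sin(2.0 * np.pi * s / M)

        # Integrate the direction angles to obtain the planimetric path
        ds = s[1] - s[0]
        x = np.cumsum(np.cos(theta)) * ds
        y = np.cumsum(np.sin(theta)) * ds

        # Curvature is d(theta)/ds; its maximum gives the radius at the bend apex
        kappa = np.gradient(theta, s)
        print("bend-apex radius of curvature:", 1.0 / np.abs(kappa).max())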

  6. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  7. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) because of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
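
    As background to the combination described above: DAS sums the delayed channel samples directly, whereas DMAS sums the pairwise products of sign-preserving square-rooted samples, which suppresses incoherent sidelobe energy. The sketch below shows only these two combiners on pre-aligned samples with assumed data; the eigenspace-based minimum variance stage of EIBMV-DMAS is not reproduced.

        import numpy as np

        def das(aligned):
            """Delay-and-sum of already delay-aligned channel samples for one pixel."""
            return aligned.sum()

        def dmas(aligned):
            """Delay-multiply-and-sum: sum of s_i * s_j over distinct channel pairs,
            where s = sign(x) * sqrt(|x|) preserves the sign of each sample."""
            s = np.sign(aligned) * np.sqrt(np.abs(aligned))
            return 0.5 * (s.sum() ** 2 - np.sum(s ** 2))   # sum over i < j without a double loop

        rng = np.random.default_rng(4)
        channels = 1.0 + 0.3 * rng.standard_normal(64)      # coherent signal plus channel noise
        print("DAS :", das(channels))
        print("DMAS:", dmas(channels))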

  8. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for future Mars pinpoint landing missions. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter addresses this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Charged particle tracking at Titan, and further applications

    NASA Astrophysics Data System (ADS)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly

    2016-04-01

    We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
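
    The core of such a tracking code is a fourth-order Runge-Kutta step applied to the Lorentz force; integrating with a negative time step yields the time-reversed trajectories mentioned above. The sketch below uses a uniform field and assumed particle parameters purely for illustration, not the Titan/magnetodisc field model of the study.

        import numpy as np

        def rk4_step(r, v, dt, E, B, q_over_m):
            """One classical RK4 step for dr/dt = v, dv/dt = (q/m)(E + v x B)."""
            def deriv(r_, v_):
                return v_, q_over_m * (E + np.cross(v_, B))

            k1r, k1v = deriv(r, v)
            k2r, k2v = deriv(r + 0.5 * dt * k1r, v + 0.5 * dt * k1v)
            k3r, k3v = deriv(r + 0.5 * dt * k2r, v + 0.5 * dt * k2v)
            k4r, k4v = deriv(r + dt * k3r, v + dt * k3v)
            return (r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r),
                    v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

        # Proton gyrating in a uniform 5 nT field; a negative dt traces the orbit backwards in time.
        q_over_m = 9.58e7                                  # C/kg for a proton
        E, B = np.zeros(3), np.array([0.0, 0.0, 5e-9])     # V/m, tesla
        r, v = np.zeros(3), np.array([1.0e4, 0.0, 0.0])    # m, m/s
        for _ in range(1000):
            r, v = rk4_step(r, v, 0.01, E, B, q_over_m)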

  10. Microstructure of the IMF turbulences at 2.5 AU

    NASA Technical Reports Server (NTRS)

    Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.

    1995-01-01

    A detailed analysis of small period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are the hodogram analysis, the minimum variance matrix analysis and the coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by the magnetosonic fast modes with periods 15 sec to 15 min. However, it is also shown that some small amplitude Alfven waves are present in the trailing edge of this region with characteristic periods (15-200 sec). The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.

  11. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results for identifying human finger joints that are affected or not affected by rheumatoid arthritis (RA). Building on the findings of the previous study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC = 0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.

  12. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  13. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    NASA Astrophysics Data System (ADS)

    Sorini, D.

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ~ 0.80 h Mpc⁻¹ and within 10% up to k ~ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
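
    In the Feldman-Kaiser-Peacock construction that this work generalises, the minimum-variance property comes from weighting each volume element (or galaxy) by w = 1 / (1 + n̄P₀), where n̄ is the expected galaxy number density and P₀ a fiducial power amplitude. The sketch below shows only that classical weighting with assumed numbers; the light-cone generalisation of the paper is not reproduced.

        import numpy as np

        def fkp_weights(nbar, P0):
            """Feldman-Kaiser-Peacock minimum-variance weights w = 1 / (1 + nbar * P0)."""
            return 1.0 / (1.0 + np.asarray(nbar, dtype=float) * P0)

        # Dense regions are weighted down relative to sparse ones at fixed fiducial power
        nbar = np.array([1e-2, 1e-3, 1e-4])      # illustrative densities in (h/Mpc)^3
        print(fkp_weights(nbar, P0=1e4))         # fiducial power in (Mpc/h)^3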

  14. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.

  15. The genetic variance but not the genetic covariance of life-history traits changes towards the north in a time-constrained insect.

    PubMed

    Sniegula, Szymon; Golab, Maria J; Drobniak, Szymon M; Johansson, Frank

    2018-06-01

    Seasonal time constraints are usually stronger at higher than lower latitudes and can exert strong selection on life-history traits and the correlations among these traits. To predict the response of life-history traits to environmental change along a latitudinal gradient, information must be obtained about genetic variance in traits and also genetic correlation between traits, that is, the genetic variance-covariance matrix, G. Here, we estimated G for key life-history traits in an obligate univoltine damselfly that faces seasonal time constraints. We exposed populations to simulated native temperatures and photoperiods and common garden environmental conditions in a laboratory set-up. Despite differences in genetic variance in these traits between populations (lower variance at northern latitudes), there was no evidence for latitude-specific covariance of the life-history traits. At simulated native conditions, all populations showed strong genetic and phenotypic correlations between traits that shaped growth and development. The variance-covariance matrix changed considerably when populations were exposed to common garden conditions compared with the simulated natural conditions, showing the importance of environmentally induced changes in multivariate genetic structure. Our results highlight the importance of estimating variance-covariance matrices in environments that mimic selection pressures and not only trait variances or mean trait values in common garden conditions for understanding the trait evolution across populations and environments. © 2018 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2018 European Society For Evolutionary Biology.

  16. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
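
    The starting point of the procedure is the Allan variance, σ_y²(τ) = ⟨(ȳ_{i+1} − ȳ_i)²⟩ / 2, computed from fractional-frequency averages over adjacent intervals of length τ. The sketch below evaluates it for a simulated white-frequency-noise record with illustrative parameters; it is not the report's clock model or its Markov-process approximation.

        import numpy as np

        def allan_variance(y, m):
            """Non-overlapping Allan variance of fractional-frequency samples y,
            averaged in blocks of m samples (so tau = m * tau0)."""
            y = np.asarray(y, dtype=float)
            n_blocks = y.size // m
            y_bar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(y_bar) ** 2)

        rng = np.random.default_rng(5)
        y = 1e-12 * rng.standard_normal(100000)            # white frequency noise, illustrative level
        for m in (1, 10, 100, 1000):
            print(m, allan_variance(y, m))                 # falls off roughly as 1/m for white FM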

  17. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
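
    For context, Yuen's statistic compares trimmed means using Winsorized variances with Welch-type degrees of freedom; the sample-size formulas of the paper are built around this statistic. The sketch below implements only the test itself, with a conventional 20% trim; the proposed sample-size formulas are not reproduced.

        import numpy as np
        from scipy import stats

        def yuen_test(x, y, trim=0.2):
            """Yuen's two-sample trimmed-mean test allowing unequal variances."""
            def pieces(a):
                a = np.sort(np.asarray(a, dtype=float))
                n = a.size
                g = int(np.floor(trim * n))
                h = n - 2 * g                              # effective sample size after trimming
                tmean = a[g:n - g].mean()                  # trimmed mean
                w = a.copy()
                w[:g], w[n - g:] = a[g], a[n - g - 1]      # Winsorize the tails
                d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
                return tmean, d, h

            t1, d1, h1 = pieces(x)
            t2, d2, h2 = pieces(y)
            t_stat = (t1 - t2) / np.sqrt(d1 + d2)
            df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
            return t_stat, df, 2.0 * stats.t.sf(abs(t_stat), df)

        rng = np.random.default_rng(6)
        print(yuen_test(rng.normal(0.0, 1.0, 30), rng.normal(0.8, 3.0, 20)))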

  18. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  19. Magnetic Footpoint Velocities: A Combination Of Minimum Energy Fit And Local Correlation Tracking

    NASA Astrophysics Data System (ADS)

    Belur, Ravindra; Longcope, D.

    2006-06-01

    Many numerical and time dependent MHD simulations of the solar atmosphere require the underlying velocity fields which should be consistent with the induction equation. Recently, Longcope (2004) introduced a new technique to infer the photospheric velocity field from a sequence of vector magnetograms which are in agreement with the induction equation. The method, the Minimum Energy Fit (MEF), determines a set of velocities and selects the velocity field with the smallest overall flow speed by minimizing an energy functional. The inferred velocity can be further constrained by information about the velocity inferred from other techniques. With this adopted technique we would expect that the inferred velocity will be close to the photospheric velocity of magnetic footpoints. Here, we demonstrate that the inferred horizontal velocities from LCT can be used to constrain the MEF velocities. We also apply this technique to actual vector magnetogram sequences and compare these velocities with velocities from LCT alone. This work is supported by DoD MURI and NSF SHINE programs.

  20. Planning maximally smooth hand movements constrained to nonplanar workspaces.

    PubMed

    Liebermann, Dario G; Krasovsky, Tal; Berman, Sigal

    2008-11-01

    The article characterizes hand paths and speed profiles for movements performed in a nonplanar, 2-dimensional workspace (a hemisphere of constant curvature). The authors assessed endpoint kinematics (i.e., paths and speeds) under the minimum-jerk model assumptions and calculated minimal amplitude paths (geodesics) and the corresponding speed profiles. The authors also calculated hand speeds using the 2/3 power law. They then compared modeled results with the empirical observations. In all, 10 participants moved their hands forward and backward from a common starting position toward 3 targets located within a hemispheric workspace of small or large curvature. Comparisons of modeled observed differences using 2-way RM-ANOVAs showed that movement direction had no clear influence on hand kinetics (p < .05). Workspace curvature affected the hand paths, which seldom followed geodesic lines. Constraining the paths to different curvatures did not affect the hand speed profiles. Minimum-jerk speed profiles closely matched the observations and were superior to those predicted by 2/3 power law (p < .001). The authors conclude that speed and path cannot be unambiguously linked under the minimum-jerk assumption when individuals move the hand in a nonplanar 2-dimensional workspace. In such a case, the hands do not follow geodesic paths, but they preserve the speed profile, regardless of the geometric features of the workspace.
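
    The minimum-jerk assumption referred to above implies, for a movement of path length D and duration T, the position profile x(τ) = D(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T, and hence the bell-shaped speed profile v = (D/T)(30τ² − 60τ³ + 30τ⁴) against which observed speeds can be compared. The sketch below simply evaluates that speed profile for assumed D and T.

        import numpy as np

        def minimum_jerk_speed(t, D, T):
            """Speed profile of a minimum-jerk point-to-point movement of
            path length D and duration T."""
            tau = np.clip(t / T, 0.0, 1.0)
            return (D / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

        D, T = 0.3, 0.8                        # e.g. a 30 cm movement lasting 0.8 s (illustrative)
        t = np.linspace(0.0, T, 200)
        v = minimum_jerk_speed(t, D, T)
        print("peak speed:", v.max(), "theoretical 1.875*D/T:", 1.875 * D / T)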

  1. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even leaving aside the effects of complex terrain, is a mixture of substances. Hyperspectral unmixing (HU), which improves the accuracy of classification and sub-pixel object detection, is therefore an active research frontier in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By taking the abundance fractions into account, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated and real hyperspectral data, and the results indicate that the proposed method recovers the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.

  2. Constrained multiple indicator kriging using sequential quadratic programming

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Erhan Tercan, A.

    2012-11-01

    Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order relations constraints and solving the constrained indicator kriging system by sequential quadratic programming. A computer code is written in the Matlab environment to implement the developed algorithm and the method is applied to the thickness data.

  3. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Since metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impact of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of each route, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188

  4. A power analysis for multivariate tests of temporal trend in species composition.

    PubMed

    Irvine, Kathryn M; Dinger, Eric C; Sarr, Daniel

    2011-10-01

    Long-term monitoring programs emphasize power analysis as a tool to determine the sampling effort necessary to effectively document ecologically significant changes in ecosystems. Programs that monitor entire multispecies assemblages require a method for determining the power of multivariate statistical models to detect trend. We provide a method to simulate presence-absence species assemblage data that are consistent with increasing or decreasing directional change in species composition within multiple sites. This step is the foundation for using Monte Carlo methods to approximate the power of any multivariate method for detecting temporal trends. We focus on comparing the power of the Mantel test, permutational multivariate analysis of variance, and constrained analysis of principal coordinates. We find that the power of the various methods we investigate is sensitive to the number of species in the community, univariate species patterns, and the number of sites sampled over time. For increasing directional change scenarios, constrained analysis of principal coordinates was as or more powerful than permutational multivariate analysis of variance, while the Mantel test was the least powerful. However, in our investigation of decreasing directional change, the Mantel test was typically as or more powerful than the other models.

  5. Reduction of variance in spectral estimates for correction of ultrasonic aberration.

    PubMed

    Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C

    2006-01-01

    A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.

  6. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
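
    Each SGP iteration takes a scaled gradient step, projects it onto the feasible set, and relaxes toward the projected point: x_{k+1} = x_k + λ_k (P_Ω(x_k − α_k D_k ∇f(x_k)) − x_k). The toy version below handles non-negativity constraints with a fixed steplength and identity scaling (a special case of the method); it does not implement the paper's steplength rules or its Kurdyka-Łojasiewicz convergence analysis.

        import numpy as np

        def sgp_nonneg(grad, x0, scaling, alpha, lam=1.0, n_iter=200):
            """Scaled gradient projection for min f(x) subject to x >= 0.

            grad    : callable returning the gradient of f
            scaling : callable returning the (positive) diagonal of D_k as a vector
            alpha   : steplength, lam : relaxation parameter in (0, 1]"""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                y = np.maximum(x - alpha * scaling(x) * grad(x), 0.0)   # scaled step + projection
                x = x + lam * (y - x)                                   # relaxed update
            return x

        # Example: non-negative least squares, min 0.5 * ||A x - b||^2 with x >= 0
        rng = np.random.default_rng(7)
        A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
        x_hat = sgp_nonneg(lambda x: A.T @ (A @ x - b), np.ones(10),
                           scaling=lambda x: np.ones_like(x),
                           alpha=1.0 / np.linalg.norm(A, 2) ** 2)
        print(x_hat)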

  7. Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mainemer, C. I.

    1978-01-01

    The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.

  8. Propagation of error from parameter constraints in quantitative MRI: Example application of multiple spin echo T2 mapping.

    PubMed

    Lankford, Christopher L; Does, Mark D

    2018-02-01

    Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  10. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

    Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common when constraining the variances to be homogeneous across trajectories. When the data are non-normally distributed, assuming normally distributed data increases the number of latent classes extracted. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Evolution of resistance and tolerance to herbivores: testing the trade-off hypothesis.

    PubMed

    Kariñho-Betancourt, Eunice; Núñez-Farfán, Juan

    2015-01-01

    Background. To cope with their natural enemies, plants rely on resistance and tolerance as defensive strategies. Evolution of these strategies among natural populations can be constrained by the absence of genetic variation or by the antagonistic genetic correlation (trade-off) between them. Also, since plant defenses are integrated by several traits, it has been suggested that trade-offs might occur between specific defense traits. Methodology/Principal Findings. We experimentally assessed (1) the presence of genetic variance in tolerance, total resistance, and leaf trichome density as a specific defense trait, (2) the extent of natural selection acting on plant defenses, and (3) the relationship of total resistance and leaf trichome density with tolerance to herbivory in the annual herb Datura stramonium. Full-sib families of D. stramonium were either exposed to natural herbivores (control) or protected from them by a systemic insecticide. We detected genetic variance for leaf trichome density, and directional selection acting on this character. However, we did not detect a significant negative correlation between tolerance and total resistance, or between tolerance and leaf trichome density. We argue that low levels of leaf damage by herbivores precluded the detection of a negative genetic correlation between plant defense strategies. Conclusions/Significance. This study provides empirical evidence of the independent evolution of plant defense strategies, and of a defensive role of leaf trichomes. The pattern of selection should favor individuals with high trichome density. Also, because leaf trichome density reduces damage by herbivores and possesses genetic variance in the studied population, its evolution is not constrained.

  12. Evolution of resistance and tolerance to herbivores: testing the trade-off hypothesis

    PubMed Central

    Kariñho-Betancourt, Eunice

    2015-01-01

    Background. To cope with their natural enemies, plants rely on resistance and tolerance as defensive strategies. Evolution of these strategies among natural populations can be constrained by the absence of genetic variation or by the antagonistic genetic correlation (trade-off) between them. Also, since plant defenses are integrated by several traits, it has been suggested that trade-offs might occur between specific defense traits. Methodology/Principal Findings. We experimentally assessed (1) the presence of genetic variance in tolerance, total resistance, and leaf trichome density as a specific defense trait, (2) the extent of natural selection acting on plant defenses, and (3) the relationship of total resistance and leaf trichome density with tolerance to herbivory in the annual herb Datura stramonium. Full-sib families of D. stramonium were either exposed to natural herbivores (control) or protected from them by a systemic insecticide. We detected genetic variance for leaf trichome density, and directional selection acting on this character. However, we did not detect a significant negative correlation between tolerance and total resistance, or between tolerance and leaf trichome density. We argue that low levels of leaf damage by herbivores precluded the detection of a negative genetic correlation between plant defense strategies. Conclusions/Significance. This study provides empirical evidence of the independent evolution of plant defense strategies, and of a defensive role of leaf trichomes. The pattern of selection should favor individuals with high trichome density. Also, because leaf trichome density reduces damage by herbivores and possesses genetic variance in the studied population, its evolution is not constrained. PMID:25780756

  13. Change of a motor synergy for dampening hand vibration depending on a task difficulty.

    PubMed

    Togo, Shunta; Kagawa, Takahiro; Uno, Yoji

    2014-10-01

    The present study investigated the relationship between the number of usable degrees of freedom (DOFs) and joint coordination during a task in which humans dampen hand vibration. Participants stood on a platform generating an anterior-posterior directional oscillation and held a water-filled cup. Their usable DOFs were changed under the following conditions of limb constraint: (1) no constraint; (2) ankle constrained; and (3) ankle-knee constrained. Kinematic whole-body data were recorded using a three-dimensional position measurement system. The jerk of each body part was evaluated as an index of oscillation intensity. To quantify joint coordination, an uncontrolled manifold (UCM) analysis was applied and the variance of joints related to hand jerk was divided into two components: a UCM component that did not affect hand jerk and an orthogonal (ORT) component that directly affected hand jerk. The results showed that hand jerk when the task used a cup filled with water was significantly smaller than when a cup containing stones was used, regardless of limb constraint condition. Thus, participants dampened their hand vibration utilizing usable joint DOFs. According to the UCM analysis, increasing the oscillation velocity and decreasing the usable DOFs through the limb constraints led to an increase in the total variance of the joints and in the UCM component, indicating that a synergy dampening hand vibration was enhanced. These results show that the variance of usable joint DOFs is fitted more closely to the UCM subspace when the joints are varied by increasing the velocity and limb constraints, and suggest that humans adopt enhanced synergies to achieve more difficult tasks.

  14. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
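
    The following is a minimal, unconstrained illustration of the reconstruction idea: given only a subset of orthonormal DWT coefficients, the pseudo-inverse returns the minimum-norm signal consistent with them. The Haar wavelet, the signal, and the retained coefficient set are assumptions for illustration; the a priori amplitude bounds central to the paper are omitted here.

```python
# Minimal sketch (not the paper's algorithm): minimum-norm reconstruction from
# an incomplete set of orthonormal Haar DWT coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(8)
n = 64
x = np.cumsum(rng.normal(size=n))                     # unknown signal (simulation only)

# Build the orthonormal DWT matrix column by column from the standard basis
rows = [np.concatenate(pywt.wavedec(e, 'haar', level=3)) for e in np.eye(n)]
W = np.array(rows).T                                  # W @ x gives the full DWT of x

keep = rng.choice(n, size=40, replace=False)          # indices of observed coefficients
y = (W @ x)[keep]                                     # incomplete DWT data

# Minimum-norm estimate consistent with the observed coefficients
x_hat = np.linalg.pinv(W[keep]) @ y
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative reconstruction error
```

    Because the Haar DWT is orthogonal, the pseudo-inverse here reduces to the transpose of the retained rows; adding the paper's amplitude bounds would turn this into a constrained least squares problem.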

  15. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  16. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  17. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

    Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  18. Integrated identification, modeling and control with applications

    NASA Astrophysics Data System (ADS)

    Shi, Guojun

    This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed-loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.

  19. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

    Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates of population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form. However, there are practical situations where certain inequality restrictions on the model parameters are more realistic; when the estimation method is least squares, these lead to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to several proposed small area estimates.
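
    As a minimal sketch of the ICLS idea (not the authors' code or data), a bound-constrained least squares solver such as scipy.optimize.lsq_linear can impose simple inequality restrictions, for example non-negativity of the coefficients:

```python
# Minimal sketch: inequality-constrained least squares with non-negativity
# bounds on the coefficients, using synthetic data.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # design matrix
beta_true = np.array([1.5, 0.0, 2.0])            # true coefficients, all >= 0
y = X @ beta_true + rng.normal(scale=0.3, size=50)

ols = np.linalg.lstsq(X, y, rcond=None)[0]       # unconstrained estimate
icls = lsq_linear(X, y, bounds=(0.0, np.inf)).x  # estimate with beta >= 0 enforced

print("OLS :", ols)
print("ICLS:", icls)
```

    More general linear inequality restrictions would require a quadratic programming formulation rather than simple bounds, but the principle is the same.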

  20. On methods of estimating cosmological bulk flows

    NASA Astrophysics Data System (ADS)

    Nusser, Adi

    2016-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for either of these definitions; the two coincide only for a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
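
    For context, the standard Maximum Likelihood estimate that CMV reduces to in the noisy-data limit amounts to a least squares dipole fit to the radial velocities. A minimal sketch with synthetic data, assuming uniform errors and no velocity-correlation prior:

```python
# Minimal sketch of the simple Maximum Likelihood bulk-flow estimate from radial
# peculiar velocities (no velocity-correlation prior, unlike the WF/CMV methods).
import numpy as np

rng = np.random.default_rng(1)
n = 500
nhat = rng.normal(size=(n, 3))
nhat /= np.linalg.norm(nhat, axis=1, keepdims=True)   # unit line-of-sight vectors
B_true = np.array([250.0, -100.0, 50.0])              # assumed bulk flow [km/s]
u = nhat @ B_true + rng.normal(scale=300.0, size=n)   # noisy radial velocities

# Solve the 3x3 normal equations  (sum nhat nhat^T) B = sum u nhat
A = nhat.T @ nhat
b = nhat.T @ u
B_hat = np.linalg.solve(A, b)
print(B_hat)
```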

  1. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, D., E-mail: sorini@mpia-hd.mpg.de

    2017-04-01

    Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as the 'light-cone effect' and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ∼ 0.80 h Mpc⁻¹ and within 10% up to k ∼ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
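
    For orientation, the Feldman, Kaiser and Peacock estimator being generalised weights each galaxy by w = 1/(1 + n̄P), where n̄ is the local mean density and P a fiducial power. A toy sketch with made-up numbers (not the paper's survey configuration):

```python
# Minimal sketch of FKP-style minimum-variance weighting of galaxies:
# w(r) = 1 / (1 + nbar(r) * P0); nbar and P0 below are purely illustrative.
import numpy as np

nbar = np.array([1e-3, 5e-4, 1e-4, 2e-5])  # mean density at each galaxy [h^3 Mpc^-3]
P0 = 1e4                                   # fiducial power at the scale of interest [h^-3 Mpc^3]
w = 1.0 / (1.0 + nbar * P0)
print(w)                                   # galaxies in denser regions get smaller individual weights
```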

  2. Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.

    ERIC Educational Resources Information Center

    Glutting, Joseph J.; McDermott, Paul A.

    1990-01-01

    Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…

  3. A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using

  4. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…

  5. Low genetic variance in the duration of the incubation period in a collared flycatcher (Ficedula albicollis) population.

    PubMed

    Husby, Arild; Gustafsson, Lars; Qvarnström, Anna

    2012-01-01

    The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.

  6. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

    The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
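
    A minimal sketch of the simple plug-in estimator described above, π̂ = Φ(d), with d the pooled-SD standardized mean difference; the data below are synthetic, and the exact distribution and small-sample bias results of the paper are not reproduced:

```python
# Minimal sketch: estimate pi (proportion of treatment scores above the control
# mean) via the plug-in pi_hat = Phi(d), where d is the pooled-SD Cohen's d.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
treat = rng.normal(loc=0.5, scale=1.0, size=40)
ctrl = rng.normal(loc=0.0, scale=1.0, size=40)

nt, nc = len(treat), len(ctrl)
sp = np.sqrt(((nt - 1) * treat.var(ddof=1) + (nc - 1) * ctrl.var(ddof=1)) / (nt + nc - 2))
d = (treat.mean() - ctrl.mean()) / sp   # standardized mean difference
pi_hat = norm.cdf(d)                    # estimated overlap-based effect size
print(d, pi_hat)
```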

  7. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    PubMed

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill was analyzed for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. The results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
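
    A minimal sketch of the multiscale variance computation, assuming a Daubechies-4 wavelet and a synthetic stand-in for the MTC series (the abstract does not specify the mother wavelet, so that choice is an assumption):

```python
# Minimal sketch: variance of DWT detail coefficients at successive scales and
# the slope (beta) of log2(variance) versus scale.
import numpy as np
import pywt

rng = np.random.default_rng(3)
mtc = np.cumsum(rng.normal(size=4096))       # synthetic stand-in for an MTC series

coeffs = pywt.wavedec(mtc, 'db4', level=8)   # [cA8, cD8, cD7, ..., cD1]
details = coeffs[1:][::-1]                   # reorder detail signals to scales 1..8
var_by_scale = np.array([np.var(d) for d in details])

scales = np.arange(1, 9)
beta = np.polyfit(scales, np.log2(var_by_scale), 1)[0]   # multiscale exponent
print(var_by_scale, beta)
```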

  8. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential; the length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
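
    Two of the standard Ne adjustments alluded to (unequal sex ratio and fluctuating census size) are simple closed-form corrections; a small illustrative sketch with made-up numbers rather than the paper's worked values:

```python
# Minimal sketch of two standard effective-population-size corrections from the
# conservation-genetics literature (illustrative numbers only).
def ne_sex_ratio(n_males, n_females):
    """Effective size under an unequal breeding sex ratio: Ne = 4*Nm*Nf/(Nm+Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def ne_fluctuating(sizes):
    """Effective size over generations with fluctuating census size (harmonic mean)."""
    return len(sizes) / sum(1.0 / n for n in sizes)

print(ne_sex_ratio(10, 40))            # 32.0, well below the census of 50 breeders
print(ne_fluctuating([50, 200, 500]))  # dominated by the bottleneck generation
```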

  9. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, John; Greenberg, Steve; Rubinstein, Francis

    2000-09-30

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants, and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.

  10. Performance of time-varying predictors in multilevel models under an assumption of fixed or random effects.

    PubMed

    Baird, Rachel; Maxwell, Scott E

    2016-06-01

    Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Constraint elimination in dynamical systems

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.

  12. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  13. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.

  14. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult for breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.

  15. Extensions of output variance constrained controllers to hard constraints

    NASA Technical Reports Server (NTRS)

    Skelton, R.; Zhu, G.

    1989-01-01

    Covariance Controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H∞, L∞, and L2 norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for continuous and discrete time systems.

  16. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    PubMed

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

    A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced χ² value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
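
    For reference, the simplest of the compared estimators (unweighted LLS) fits the log-signal model ln S_i = ln S0 − b g_iᵀ D g_i by ordinary least squares. A self-contained sketch on synthetic data, where the gradient scheme, b-value and tensor are assumptions:

```python
# Minimal sketch of the unweighted linear least squares diffusion tensor fit.
import numpy as np

rng = np.random.default_rng(4)
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)       # unit gradient directions
b = 1000.0                                          # b-value [s/mm^2]
D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])          # a prolate tensor [mm^2/s]
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))
S *= np.exp(rng.normal(scale=0.02, size=len(S)))    # multiplicative noise

# Design matrix for the unknowns [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
gx, gy, gz = g.T
X = np.column_stack([np.ones_like(gx),
                     -b * gx**2, -b * gy**2, -b * gz**2,
                     -2 * b * gx * gy, -2 * b * gx * gz, -2 * b * gy * gz])
beta = np.linalg.lstsq(X, np.log(S), rcond=None)[0]
trace_hat = beta[1] + beta[2] + beta[3]
print(trace_hat, np.trace(D_true))
```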

  17. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
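
    A minimal sketch of conventional wavelet threshold de-noising with the usual median-absolute-deviation noise variance estimate and a universal threshold; the two-state Gaussian mixture classification and the improved threshold function proposed in the paper are not reproduced here:

```python
# Minimal sketch: wavelet soft-threshold de-noising with the MAD noise estimate
# taken from the finest-scale detail coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 12 * t))
noisy = clean + rng.normal(scale=0.3, size=t.size)

coeffs = pywt.wavedec(noisy, 'sym8', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise std from finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
recon = pywt.waverec(den, 'sym8')
print(np.mean((recon - clean) ** 2), np.mean((noisy - clean) ** 2))  # MSE before/after
```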

  18. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  19. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
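
    A minimal two-stage residual VQ sketch using k-means codebooks illustrates the direct-sum structure; the entropy constraint and Lagrangian rate-distortion optimization of the paper are omitted, and the data and codebook sizes are assumptions:

```python
# Minimal sketch of a two-stage residual (multistage) vector quantizer built
# from k-means codebooks; reconstruction is the direct sum of stage codewords.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))                        # training vectors

stage1 = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)
resid = X - stage1.cluster_centers_[stage1.labels_]   # residual after stage 1
stage2 = KMeans(n_clusters=16, n_init=10, random_state=0).fit(resid)

# Encode/decode a batch of vectors
idx1 = stage1.predict(X)
r = X - stage1.cluster_centers_[idx1]
idx2 = stage2.predict(r)
recon = stage1.cluster_centers_[idx1] + stage2.cluster_centers_[idx2]
print(np.mean((X - recon) ** 2))                      # residual quantization error
```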

  20. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  1. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  2. Multiple Signal Classification for Determining Direction of Arrival of Frequency Hopping Spread Spectrum Signals

    DTIC Science & Technology

    2014-03-27

    [Fragment of the report's table of contents and acronym list: Number of Hops Hs; Number of Sensors M; Standard deviation vs. Ns; Bias; MTM, multiple taper method; MUSIC, multiple signal classification; MVDR, minimum variance distortionless response; PSK, phase shift keying; QAM.]

  3. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
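
    The "principle of linear minimum variance" used at the fusion level reduces, for two independent scalar local estimates, to inverse-variance weighting. A toy sketch with illustrative numbers (not the INS/GNSS/CNS configuration of the paper):

```python
# Minimal sketch: linear minimum-variance fusion of two independent local
# estimates of the same scalar quantity via inverse-variance weighting.
x1, p1 = 10.2, 4.0     # local estimate and its error variance (e.g., from one sensor)
x2, p2 = 9.6, 1.0      # second local estimate (e.g., from another sensor)

w1 = (1 / p1) / (1 / p1 + 1 / p2)
w2 = (1 / p2) / (1 / p1 + 1 / p2)
x_fused = w1 * x1 + w2 * x2
p_fused = 1.0 / (1 / p1 + 1 / p2)
print(x_fused, p_fused)   # fused variance is smaller than either local variance
```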

  4. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
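
    For a small generic state-space model (not an AO-sized phase-screen model), the steady-state Kalman gain discussed above can be obtained from the dual discrete algebraic Riccati equation; a minimal sketch with assumed system matrices:

```python
# Minimal sketch: steady-state Kalman gain via the dual discrete algebraic
# Riccati equation (small illustrative system, not an adaptive-optics model).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.99, 0.10],
              [0.00, 0.95]])          # state transition (assumed)
C = np.array([[1.0, 0.0]])            # measurement matrix (assumed)
Q = 0.01 * np.eye(2)                  # process noise covariance
R = np.array([[0.1]])                 # measurement noise covariance

P = solve_discrete_are(A.T, C.T, Q, R)          # dual DARE gives prediction covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # steady-state Kalman gain
print(K)
```

    The "curse of dimensionality" mentioned above arises because solving this Riccati equation scales poorly with the state dimension, which motivates the approximation proposed in the paper.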

  5. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  6. Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods

    NASA Astrophysics Data System (ADS)

    Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.

    2017-12-01

    The Malombe Basin is part of the Malawi Rift which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70km from north to south and 35km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was done across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data is used to understand the topographic expression of the fault scarps. Estimates for minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3km for the western border fault; the minimum displacement for the eastern border fault is 740m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement layer. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; therefore expected results for our work will be to constrain more accurate basin depth by integrating the gravity profiles. Through more accurate basement depth estimates we also gain more accurate displacement estimates for the Basin's faults. Not only do the improved depth estimates serve as a proxy to the viability of hydrocarbon exploration efforts in the region, but the improved displacement estimates also provide a better understanding of extension accommodation within the Malawi Rift.

  7. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
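
    For reference, the static mean-variance building block being compared against has a closed-form global minimum-variance solution, w ∝ Σ⁻¹1; a toy sketch with an assumed covariance matrix rather than the FTSE Bursa Malaysia data:

```python
# Minimal sketch: global minimum-variance portfolio weights
# w = (Sigma^-1 1) / (1' Sigma^-1 1), with an illustrative covariance matrix.
import numpy as np

Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.060]])   # assumed covariance of three indices
ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()                                # normalize so the weights sum to one
print(w, w @ Sigma @ w)                     # weights and resulting portfolio variance
```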

  8. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  9. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  10. SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shojaei, M; Dumitru, N; Pella, S

    2016-06-15

    Purpose: High dose rate brachytherapy is a highly localized radiation therapy that has a very high dose gradient. Thus one of the most important parts of the treatment is the immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review the ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured with regard to their dose variance. The localization and immobilization devices used were assessed for the capability to prevent motion before and during the treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction motion, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.

  11. Interdecadal modulation of El Niño teleconnection on monsoon Asia climate over the past five centuries

    NASA Astrophysics Data System (ADS)

    Li, J.; Xie, S. P.

    2017-12-01

    The El Niño influence on monsoon Asia climate weakened during the mid-20th century and strengthened substantially after the late 1970s. Exploring the nature of such an interdecadal variation is constrained by short instrumental records. Here we synthesize Indo-Pacific tree-ring and coral records to reconstruct monsoon Asia temperature and moisture change during the past five centuries, and show that the interdecadal modulation of the El Niño teleconnection on monsoon Asia climate is a robust feature beyond the instrumental era. Comparison with proxy El Niño records indicates that the El Niño-monsoon Asia climate teleconnection is controlled by interdecadal changes in ENSO variance, with strong (weak) teleconnections in periods of high (low) variance, respectively.

  12. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  13. Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach

    USGS Publications Warehouse

    Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.

    1999-01-01

    Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.

  14. Design optimization and probabilistic analysis of a hydrodynamic journal bearing

    NASA Technical Reports Server (NTRS)

    Liniecki, Alexander G.

    1990-01-01

    A nonlinear constrained optimization of a hydrodynamic bearing was performed, yielding three main variables: radial clearance, bearing length to diameter ratio, and lubricating oil viscosity. As an objective function, a combined model of temperature rise and oil supply was adopted. The optimized model of the bearing was simulated for a population of 1000 cases using the Monte Carlo statistical method. It appeared that the so-called 'optimal solution' produced failed bearings in more than 50 percent of cases, because their minimum oil film thickness violated the stipulated minimum constraint value. As a remedy, a change of oil viscosity is suggested after the sensitivities of several variables have been investigated.

  15. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular to circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.

  16. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  17. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. The real-time, early-warning monitoring of landslides has important significance in reducing casualties and property losses. In this paper, by taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor was regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula between the FBG wavelength and the deflection of the sensor was established using the elastic mechanics principle. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of the landslide could be calculated by the established theoretical formula using the changing values of the FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually. This indicates that the theoretical results are increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high precision and early-warning monitoring of the slope.

  18. Density and lithospheric structure at Tyrrhena Patera, Mars, from gravity and topography data

    NASA Astrophysics Data System (ADS)

    Grott, M.; Wieczorek, M. A.

    2012-09-01

    The Tyrrhena Patera highland volcano, Mars, is associated with a relatively well localized gravity anomaly, and we have carried out a localized admittance analysis in the region to constrain the density of the volcanic load, the load thickness, and the elastic thickness at the time of load emplacement. The employed admittance model considers loading of an initially spherical surface, and surface as well as subsurface loading is taken into account. Our results indicate that the gravity and topography data available at Tyrrhena Patera are consistent with the absence of subsurface loading, but the presence of a small subsurface load cannot be ruled out. We obtain minimum load densities of 2960 kg m-3, minimum load thicknesses of 5 km, and minimum load volumes of 0.6 × 10^6 km^3. Photogeological evidence suggests that pyroclastic deposits make up at most 30% of this volume, such that the bulk of Tyrrhena Patera is likely composed of competent basalt. Best fitting model parameters are a load density of 3343 kg m-3, a load thickness of 10.8 km, and a load volume of 1.7 × 10^6 km^3. These relatively large load densities indicate that lava compositions are comparable to those at other martian volcanoes, and densities are comparable to those of the martian meteorites. The elastic thickness in the region is constrained to be smaller than 27.5 km at the time of loading, indicating surface heat flows in excess of 24 mW m-2.

  19. CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk

    2016-02-20

    In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryonic acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.

  20. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.

  1. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35°C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and at the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  2. Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong

    One approach to guarantee the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place the minimum number of SGs at a subset of candidate locations such that there is a connected (or 2-connected) path from any underwater sensor node (USN) to the base station. We propose a polynomial time approximation algorithm for the connected C-SGP problem and the survivable C-SGP problem, respectively. Simulations are conducted to verify our algorithms' efficiency.

  3. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  4. Constraining Earthquake Source Parameters in Rupture Patches and Rupture Barriers on Gofar Transform Fault, East Pacific Rise from Ocean Bottom Seismic Data

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.

    2015-12-01

    On Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance is low between the data and model. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method where variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies from rupture patches and rupture barriers are not discernible. Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones, respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along strike fault zone properties on Gofar transform fault.

  5. The contribution of the mitochondrial genome to sex-specific fitness variance.

    PubMed

    Smith, Shane R T; Connallon, Tim

    2017-05-01

    Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it makes it indifferent to evolutionary changes that exclusively benefit males. The constrained response of mtDNA to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and the distribution of fitness effects (DFE) among mutations-including the correlation of mutant fitness effects between the sexes-on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes, and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  6. U and Th isotope constraints on the duration of Heinrich events H0-H4 in the southeastern Labrador Sea

    NASA Astrophysics Data System (ADS)

    Veiga-Pires, C. C.; Hillaire-Marcel, C.

    1999-04-01

    The duration and sequence of events recorded in Heinrich layers at sites near the Hudson Strait source area for ice-rafted material are still poorly constrained, notably because of the limits and uncertainties of the 14C chronology. Here we use high-resolution 230Th-excess measurements, in a 6 m sequence raised from Orphan Knoll (southern Labrador Sea), to constrain the duration of the deposition of the five most recent Heinrich (H) layers. On the basis of maximum/minimum estimates for the mean glacial 230Th-excess flux at the studied site, a minimum/maximum duration of 1.0/0.6, 1.4/0.8, 1.3/0.8, 1.5/0.9, and 2.1/1.3 kyr is obtained for H0 (~Younger Dryas), H1, H2, H3, and H4, respectively. Thorium-230-excess inventories and other sedimentological features indicate a reduced but still significant lateral sedimentary supply by the Western Boundary Undercurrent during the glacial interval. U and Th series systematics also provide insights into source rocks of H layer sediments (i.e., into distal Irminger Basin/local Labrador Sea supplies).

  7. Examining ecological validity in social interaction: problems of visual fidelity, gaze, and social potential.

    PubMed

    Reader, Arran T; Holmes, Nicholas P

    2016-01-01

    Social interaction is an essential part of the human experience, and much work has been done to study it. However, several common approaches to examining social interactions in psychological research may inadvertently either unnaturally constrain the observed behaviour by causing it to deviate from naturalistic performance, or introduce unwanted sources of variance. In particular, these sources are the differences between naturalistic and experimental behaviour that occur from changes in visual fidelity (quality of the observed stimuli), gaze (whether it is controlled for in the stimuli), and social potential (potential for the stimuli to provide actual interaction). We expand on these possible sources of extraneous variance and why they may be important. We review the ways in which experimenters have developed novel designs to remove these sources of extraneous variance. New experimental designs using a 'two-person' approach are argued to be one of the most effective ways to develop more ecologically valid measures of social interaction, and we suggest that future work on social interaction should use these designs wherever possible.

  8. Robust, Adaptive Radar Detection and Estimation

    DTIC Science & Technology

    2015-07-21

    Since the cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ²R⁻¹ and S′ = (1/σ²)S. Then, the revised cost function in ... an inverse covariance estimate of the form Σᵢ vᵢvᵢᴴ. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank Constrained Maximum Likelihood: Our ... even as almost all available training samples are corrupted. • Probability of Detection vs. SNR: We apply three test statistics, the normalized matched ...

  9. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  10. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
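    For orientation, the static mean-variance step that the abstract contrasts with stochastic optimization can be sketched as a constrained minimum-variance problem; the returns below are synthetic placeholders, not FTSE Bursa Malaysia data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Hypothetical monthly returns for 4 sector indices
returns = rng.normal(0.008, 0.04, size=(60, 4))
mu, Sigma = returns.mean(axis=0), np.cov(returns, rowvar=False)

target = float(np.median(mu))          # required expected return (feasible by construction)
n = len(mu)
cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: w @ mu - target}]
res = minimize(lambda w: w @ Sigma @ w, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n, constraints=cons)
w = res.x
print(w, w @ mu, np.sqrt(w @ Sigma @ w))   # weights, expected return, volatility
```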

  11. A Measurement of the Cosmic Microwave Background Gravitational Lensing Potential from 100 Square Degrees of SPTpol Data

    NASA Astrophysics Data System (ADS)

    Story, K. T.; Hanson, D.; Ade, P. A. R.; Aird, K. A.; Austermann, J. E.; Beall, J. A.; Bender, A. N.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Chiang, H. C.; Cho, H.-M.; Citron, R.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Everett, W.; Gallicchio, J.; Gao, J.; George, E. M.; Gilbert, A.; Halverson, N. W.; Harrington, N.; Henning, J. W.; Hilton, G. C.; Holder, G. P.; Holzapfel, W. L.; Hoover, S.; Hou, Z.; Hrubes, J. D.; Huang, N.; Hubmayr, J.; Irwin, K. D.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Li, D.; Liang, C.; Luong-Van, D.; McMahon, J. J.; Mehl, J.; Meyer, S. S.; Mocanu, L.; Montroy, T. E.; Natoli, T.; Nibarger, J. P.; Novosad, V.; Padin, S.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Smecher, G.; Stark, A. A.; Tucker, C.; Vanderlinde, K.; Vieira, J. D.; Wang, G.; Whitehorn, N.; Yefremenko, V.; Zahn, O.

    2015-09-01

    We present a measurement of the cosmic microwave background (CMB) gravitational lensing potential using data from the first two seasons of observations with SPTpol, the polarization-sensitive receiver currently installed on the South Pole Telescope. The observations used in this work cover 100 deg2 of sky with arcminute resolution at 150 GHz. Using a quadratic estimator, we make maps of the CMB lensing potential from combinations of CMB temperature and polarization maps. We combine these lensing potential maps to form a minimum-variance (MV) map. The lensing potential is measured with a signal-to-noise ratio of greater than one for angular multipoles between 100 < L < 250. This is the highest signal-to-noise mass map made from the CMB to date and will be powerful in cross-correlation with other tracers of large-scale structure. We calculate the power spectrum of the lensing potential for each estimator, and we report the value of the MV power spectrum between 100 < L < 2000 as our primary result. We constrain the ratio of the spectrum to a fiducial ΛCDM model to be A_MV = 0.92 ± 0.14 (Stat.) ± 0.08 (Sys.). Restricting ourselves to polarized data only, we find A_POL = 0.92 ± 0.24 (Stat.) ± 0.11 (Sys.). This measurement rejects the hypothesis of no lensing at 5.9σ using polarization data alone, and at 14σ using both temperature and polarization data.
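    In the idealized case of uncorrelated estimator noise, the minimum-variance combination of several lensing estimates reduces to inverse-variance weighting; the toy sketch below uses made-up noise levels rather than the SPTpol values.

```python
import numpy as np

rng = np.random.default_rng(8)
true_phi = rng.standard_normal(1000)               # toy lensing-potential modes
noise_var = {"TT": 1.0, "EB": 0.6, "TE": 1.5}      # hypothetical per-estimator noise variances
est = {k: true_phi + np.sqrt(v) * rng.standard_normal(1000) for k, v in noise_var.items()}

w = {k: 1.0 / v for k, v in noise_var.items()}     # inverse-variance weights
norm = sum(w.values())
phi_mv = sum(w[k] * est[k] for k in est) / norm    # minimum-variance combination

for k in est:
    print(k, np.var(est[k] - true_phi))
print("MV", np.var(phi_mv - true_phi), "expected", 1.0 / norm)
```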

  12. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Mars Observer trajectory and orbit design

    NASA Technical Reports Server (NTRS)

    Beerer, Joseph G.; Roncoli, Ralph B.

    1991-01-01

    The Mars Observer launch, interplanetary, Mars orbit insertion, and mapping orbit designs are described. The design objective is to enable a near-maximum spacecraft mass to be placed in orbit about Mars. This is accomplished by keeping spacecraft propellant requirements to a minimum, selecting a minimum acceptable launch period, equalizing the spacecraft velocity change requirement at the beginning and end of the launch period, and constraining the orbit insertion maneuvers to be coplanar. The mapping orbit design objective is to provide the opportunity for global observation of the planet by the science instruments while facilitating the spacecraft design. This is realized with a sun-synchronous near-polar orbit whose ground-track pattern covers the planet at progressively finer resolution.

  14. Assessing the Minimum Number of Synchronization Triggers Necessary for Temporal Variance Compensation in Commercial Electroencephalography (EEG) Systems

    DTIC Science & Technology

    2012-09-01

    ... by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and QUASAR DSI helmet-based systems ... (ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).

  15. Foreign Language Training in U.S. Undergraduate IB Programs: Are We Providing Students What They Need to Be Successful?

    ERIC Educational Resources Information Center

    Johnson, Jim

    2017-01-01

    A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…

  16. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the use of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for a neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
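    One simple way to realize the inversion-free, warm-started idea (not necessarily the authors' exact update) is projected gradient descent on the constrained quadratic w^H R w subject to a^H w = 1, reusing the previous point's weights as the starting guess; the data and steering vector below are synthetic.

```python
import numpy as np

def iterative_mv_weights(R, a, w0, iters=200):
    """Minimize w^H R w subject to a^H w = 1 by projected gradient descent,
    warm-started from w0 (e.g. the neighboring imaging point's weights)."""
    aa = np.vdot(a, a).real
    w = w0 + a * (1 - np.vdot(a, w0)) / aa        # enforce the distortionless constraint
    step = 0.5 / np.linalg.norm(R, 2)             # safe step size from the largest eigenvalue
    for _ in range(iters):
        g = R @ w                                 # (half) gradient of w^H R w
        g = g - a * (np.vdot(a, g) / aa)          # project onto the constraint tangent space
        w = w - step * g
    return w

rng = np.random.default_rng(0)
L = 16
X = rng.standard_normal((L, 200)) + 1j * rng.standard_normal((L, 200))
R = X @ X.conj().T / 200 + 0.01 * np.eye(L)       # sample covariance with diagonal loading
a = np.ones(L, dtype=complex)                     # steering vector for the focal point (assumed)

w_exact = np.linalg.solve(R, a)
w_exact /= np.vdot(a, w_exact)                    # closed-form MVB weights for comparison
w_iter = iterative_mv_weights(R, a, w0=np.ones(L, dtype=complex) / L)
print(np.linalg.norm(w_iter - w_exact))           # small residual after the iterations
```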

  17. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological 'niche modeling' using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D2 (standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
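    The partitioning of Mahalanobis D2 described above can be operationalized with an eigen-decomposition of the covariance of the environmental variables at presence points; the data and the number of retained components below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical environmental variables at 300 presence locations
X = rng.normal(size=(300, 4)) @ np.diag([1.0, 2.0, 0.5, 3.0]) + [15, 400, 800, 30]

mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(S)          # eigenvalues in ascending order

# The smallest-variance components are the near-constant combinations of variables,
# interpreted as candidate limiting factors.
def partial_d2(x, k=1):
    z = eigvec.T @ (x - mu)                 # scores on the principal axes
    return np.sum(z[:k]**2 / eigval[:k])    # Mahalanobis distance restricted to k axes

print(partial_d2(mu + 0.1))                 # small value -> consistent with suitable habitat
```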

  18. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly when related to cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace a spectral line in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are examples of such impacts.
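    A minimal Capon/minimum-variance spectrum of the kind compared with the Fourier periodogram above can be sketched as follows; the test signal, filter order and diagonal loading are illustrative choices.

```python
import numpy as np

fs, N = 1000.0, 1024
t = np.arange(N) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 180 * t) + rng.normal(0, 1, N)

p = 32                                                  # filter order (degrees of freedom)
snaps = np.lib.stride_tricks.sliding_window_view(x, p)  # overlapping length-p snapshots
R = snaps.T @ snaps / snaps.shape[0]                    # sample covariance matrix
Rinv = np.linalg.inv(R + 1e-6 * np.eye(p))              # small diagonal load for stability

freqs = np.linspace(0, fs / 2, 512)
P_mv = np.empty_like(freqs)
for i, f in enumerate(freqs):
    e = np.exp(2j * np.pi * f * np.arange(p) / fs)      # Fourier (steering) vector
    P_mv[i] = 1.0 / np.real(np.vdot(e, Rinv @ e))       # Capon / MVM spectrum

print(freqs[np.argsort(P_mv)[-5:]])   # strongest MV peaks; should cluster near 120 and 180 Hz
```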

  19. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness.
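    For orientation, a bare-bones gravitational search algorithm (the baseline that ECGSA modifies by reusing the best past evaluations and making α dynamic) might look like the sketch below; the constants, bounds and test function are illustrative.

```python
import numpy as np

def gsa(fitness, dim, n_agents=30, iters=200, G0=100.0, alpha=20.0, bounds=(-5.0, 5.0)):
    """Baseline gravitational search algorithm for minimization (simplified:
    all agents attract, no Kbest schedule)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:
            best_f, best_x = f.min(), X[f.argmin()].copy()
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)   # better agents are heavier
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)               # decaying gravitational constant
        acc = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum(rng.random((n_agents, 1)) * G * M[:, None] * diff / dist[:, None], axis=0)
        V = rng.random(X.shape) * V + acc
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

print(gsa(lambda x: np.sum(x**2), dim=5))   # sphere benchmark; should approach the origin
```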

  20. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    NASA Astrophysics Data System (ADS)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of these disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. In this work we not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also practically show the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.

  1. The performance of matched-field track-before-detect methods using shallow-water Pacific data.

    PubMed

    Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem

    2002-07-01

    Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.

  2. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. The resulting fast iterative algorithm can be proven to converge to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters, by means of a continuation technique, allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.

  3. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  4. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource constrained planning problem of multiple projects with separable activities. This problem provides a plan to process the activities considering resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum slack time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.

  5. Experimental joint quantum measurements with minimum uncertainty.

    PubMed

    Ringbauer, Martin; Biggerstaff, Devon N; Broome, Matthew A; Fedrizzi, Alessandro; Branciard, Cyril; White, Andrew G

    2014-01-17

    Quantum physics constrains the accuracy of joint measurements of incompatible observables. Here we test tight measurement-uncertainty relations using single photons. We implement two independent, idealized uncertainty-estimation methods, the three-state method and the weak-measurement method, and adapt them to realistic experimental conditions. Exceptional quantum state fidelities of up to 0.999 98(6) allow us to verge upon the fundamental limits of measurement uncertainty.

  6. A Theory of Cramer-Rao Bounds for Constrained Parametric Models

    DTIC Science & Technology

    2010-01-01

    ... overly optimistic. This occurs frequently in communications when the signal-to-noise ratio (SNR) or data transmission size decreases. ... the LSE is BLUE and is given by dᵀθ̂_CLS(x) = dᵀθ₁ + dᵀU(UᵀQU)†UᵀHᵀC⁻¹(x − Hθ₁) (3.28), similar to (3.27), with variance dᵀU ...

  7. Relation between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi

    2002-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
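    The minimum variance analysis applied to the magnetometer data can be sketched as an eigen-decomposition of the field's 3x3 variance matrix; the samples below are synthetic, shaped to mimic a tangential-discontinuity crossing rather than taken from Ulysses.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classic MVA: eigen-decompose the magnetic variance matrix; the eigenvector
    of the smallest eigenvalue estimates the normal to the discontinuity."""
    M = np.cov(B, rowvar=False)              # 3x3 variance matrix of (Bx, By, Bz)
    eigval, eigvec = np.linalg.eigh(M)       # eigenvalues in ascending order
    return eigvec[:, 0], eigval

rng = np.random.default_rng(4)
n = 200
B = np.column_stack([
    5.0 * np.tanh(np.linspace(-3, 3, n)) + 0.2 * rng.standard_normal(n),  # rotating component
    3.0 + 0.5 * rng.standard_normal(n),                                   # intermediate component
    0.2 * rng.standard_normal(n)])                                        # nearly constant normal component
n_hat, lam = minimum_variance_analysis(B)
# A small, well-separated minimum eigenvalue with B.n ~ 0 suggests a tangential discontinuity
print(n_hat, lam, np.mean(B @ n_hat))
```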

  8. Relation Between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.

  9. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining an ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates the model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and parameter evolution process, and controls narrowing of the parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
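    A stripped-down cousin of this approach, an augmented-state ensemble Kalman filter with a Liu-West-style kernel shrinkage of the parameter ensemble, is sketched below on a toy scalar model; the model, noise levels and shrinkage factor are assumptions, and the SEnKF above differs in detail.

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, q, r = 0.9, 0.1, 0.2               # true parameter, process and observation noise std
T, Ne = 200, 100                           # time steps, ensemble size

x, ys = 0.0, []
for _ in range(T):                         # synthetic truth and observations
    x = a_true * x + 1.0 + q * rng.standard_normal()
    ys.append(x + r * rng.standard_normal())

ens = np.column_stack([rng.normal(0, 1, Ne), rng.uniform(0.6, 1.1, Ne)])   # [state, parameter]
h = 0.98                                   # kernel shrinkage factor
H = np.array([[1.0, 0.0]])                 # we observe the state only
for y in ys:
    # Forecast: propagate states; parameters persist, with kernel smoothing (mean and
    # variance of the parameter ensemble are approximately preserved)
    ens[:, 0] = ens[:, 1] * ens[:, 0] + 1.0 + q * rng.standard_normal(Ne)
    abar, astd = ens[:, 1].mean(), ens[:, 1].std()
    ens[:, 1] = h * ens[:, 1] + (1 - h) * abar + np.sqrt(1 - h**2) * astd * rng.standard_normal(Ne)
    # Analysis: standard EnKF update with perturbed observations
    P = np.cov(ens, rowvar=False)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r**2)
    y_pert = y + r * rng.standard_normal(Ne)
    ens = ens + (K @ (y_pert - ens[:, 0])[None, :]).T

print(ens[:, 1].mean())                    # parameter estimate; should move toward 0.9
```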

  10. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important step for a spectral CT to discriminate materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, a total variation constraint is imposed on the coefficient images in our overall objective function, with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is carried out. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thus improving the image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.

  11. Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Roberts, D. Aaron

    2010-01-01

    We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) from 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide a minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
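    The coupling of a geostatistical criterion with simulated annealing can be illustrated with a simplified objective: the simple-kriging variance (a stand-in for the paper's variance-reduction criterion) averaged over a grid is minimized by swap moves; coordinates, covariance parameters and the number of gauges are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
cand = rng.uniform(0, 100, (84, 2))                       # candidate gauge coordinates (km)
grid = np.stack(np.meshgrid(np.linspace(0, 100, 20),
                            np.linspace(0, 100, 20)), -1).reshape(-1, 2)
sill, corr_range = 1.0, 30.0                              # exponential covariance parameters (assumed)
cov = lambda d: sill * np.exp(-d / corr_range)

def mean_kriging_variance(idx):
    """Average simple-kriging variance over the grid for the gauges cand[idx]."""
    S = cand[list(idx)]
    C = cov(np.linalg.norm(S[:, None] - S[None], axis=-1)) + 1e-9 * np.eye(len(S))
    c0 = cov(np.linalg.norm(grid[:, None] - S[None], axis=-1))      # (grid points, gauges)
    w = np.linalg.solve(C, c0.T)                                     # kriging weights
    return np.mean(sill - np.einsum('ij,ji->i', c0, w))

k, temp, cool = 30, 1.0, 0.995
current = set(rng.choice(84, k, replace=False))
f_cur = mean_kriging_variance(current)
for _ in range(2000):                                     # annealing by single-gauge swaps
    out_g = rng.choice(list(current))
    in_g = rng.choice(list(set(range(84)) - current))
    prop = (current - {out_g}) | {in_g}
    f_prop = mean_kriging_variance(prop)
    if f_prop < f_cur or rng.random() < np.exp((f_cur - f_prop) / temp):
        current, f_cur = prop, f_prop
    temp *= cool
print(f_cur, sorted(current))
```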

  13. Experimental demonstration of quantum teleportation of a squeezed state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro

    2005-10-15

    Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe that the variance of the teleported squeezed state is smaller than that for the vacuum state input.

  14. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
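    The uniform white-noise model for quantizer error that underlies the effective SNR can be checked numerically; the bit depth, full-scale range and test signal below are arbitrary choices rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, f0, N = 1.0e4, 123.0, 100_000
t = np.arange(N) / fs
signal = 0.9 * np.sin(2 * np.pi * f0 * t)
noise = 0.05 * rng.standard_normal(N)
x = signal + noise

bits, full_scale = 8, 2.0                    # quantizer resolution and input range (assumed)
delta = full_scale / 2**bits
xq = np.round(x / delta) * delta             # uniform mid-tread quantizer

q_err = xq - x
print(q_err.var(), delta**2 / 12)            # empirical error variance vs. the Delta^2/12 model
snr_eff = signal.var() / (noise.var() + delta**2 / 12)
print(10 * np.log10(snr_eff))                # effective SNR including quantization noise
```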

  15. Probing primordial features with next-generation photometric and radio surveys

    NASA Astrophysics Data System (ADS)

    Ballardini, M.; Finelli, F.; Maartens, R.; Moscardini, L.

    2018-04-01

    We investigate the possibility of using future photometric and radio surveys to constrain the power spectrum of primordial fluctuations that is predicted by inflationary models with a violation of the slow-roll phase. We forecast constraints with a Fisher analysis on the amplitude of the parametrized features on ultra-large scales, in order to assess whether these could be distinguishable over the cosmic variance. We find that the next generation of photometric and radio surveys has the potential to test these models at a sensitivity better than current CMB experiments and that the synergy between galaxy and CMB observations is able to constrain models with many extra parameters. In particular, an SKA continuum survey with a huge sky coverage and a flux threshold of a few μJy could confirm the presence of a new phase in the early Universe at more than 3σ.
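    The cosmic-variance-limited core of such a forecast reduces to a one-parameter Fisher sum; the fiducial spectrum and feature template below are toy placeholders rather than the paper's parametrization.

```python
import numpy as np

ell = np.arange(2, 500)
C0 = 1.0 / (ell * (ell + 1.0))                   # smooth fiducial spectrum (arbitrary units)
dC_dA = 0.05 * C0 * np.sin(ell / 20.0)           # hypothetical feature template dC_ell/dA
f_sky = 0.7

var_Cl = 2.0 / ((2 * ell + 1) * f_sky) * C0**2   # cosmic-variance error on each C_ell
F = np.sum(dC_dA**2 / var_Cl)                    # 1x1 Fisher "matrix" for the amplitude A
print("sigma(A) =", 1.0 / np.sqrt(F))            # forecast uncertainty on the feature amplitude
```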

  16. Cosmicflows Constrained Local UniversE Simulations

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e., the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  17. Dispatching power system for preventive and corrective voltage collapse problem in a deregulated power system

    NASA Astrophysics Data System (ADS)

    Alemadi, Nasser Ahmed

    Deregulation has brought opportunities for increasing efficiency of production and delivery and reduced costs to customers. Deregulation has also brought great challenges in providing the reliability and security customers have come to expect and demand from the electrical delivery system. One of the challenges in the deregulated power system is voltage instability. Voltage instability has become the principal constraint on power system operation for many utilities. Voltage instability is a unique problem because it can produce an uncontrollable, cascading instability that results in blackout for a large region or an entire country. In this work we define a system of advanced analytical methods and tools for secure and efficient operation of the power system in the deregulated environment. The work consists of two modules: (a) a contingency selection module and (b) a security-constrained optimization module. The contingency selection module to be used for voltage instability is the Voltage Stability Security Assessment and Diagnosis (VSSAD). VSSAD shows that each voltage control area and its reactive reserve basin describe a subsystem or agent that has a unique voltage instability problem. VSSAD identifies each such agent. VSSAD assesses proximity to voltage instability for each agent and ranks voltage instability agents for each contingency simulated. Contingency selection and ranking for each agent is also performed. Diagnosis of where, why, when, and what can be done to cure voltage instability for each equipment outage and transaction change combination that has no load flow solution is also performed. The security-constrained optimization module developed here solves a minimum control solvability problem. A minimum control solvability problem obtains the reactive reserves through action of voltage control devices that VSSAD determines are needed in each agent to obtain solution of the load flow. VSSAD makes a physically impossible recommendation of adding reactive generation capability to specific generators to allow a load flow solution to be obtained. The minimum control solvability problem can also obtain solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. A minimum control solvability problem will be implemented as a corrective control that achieves the above objectives by using minimum control changes. The control includes: (1) voltage setpoints on generator bus voltage terminals; (2) under-load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain the feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by Security Constrained Dispatch. The corrective control would be computed offline and stored for each contingency that produces voltage instability. The control is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages and the requirements to implement the corrective control are also discussed.

  18. Statistical procedures for determination and verification of minimum reporting levels for drinking water methods.

    PubMed

    Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A

    2006-01-01

    The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
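    A simplified sketch of the LCMRL construction for the constant-variance (ordinary least-squares) case is given below; the prediction-interval formula is standard, while the spiked concentrations and recoveries are purely illustrative and the variance-weighted branch of the procedure is omitted:

    ```python
    import numpy as np
    from scipy import stats

    def lcmrl_ols(true_conc, measured, confidence=0.99, dqo=(0.5, 1.5)):
        """Simplified LCMRL estimate assuming constant variance (plain OLS).

        Fits measured vs. true concentration, builds two-sided prediction
        intervals at the given confidence, and returns the lowest true
        concentration at which the whole interval lies between the data
        quality objective lines (50% and 150% recovery by default).
        """
        x, y = np.asarray(true_conc, float), np.asarray(measured, float)
        n = x.size
        slope, intercept, *_ = stats.linregress(x, y)
        resid = y - (intercept + slope * x)
        s = np.sqrt(resid @ resid / (n - 2))              # residual std. error
        t = stats.t.ppf(1 - (1 - confidence) / 2, n - 2)
        xbar, sxx = x.mean(), ((x - x.mean()) ** 2).sum()

        grid = np.linspace(x.min(), x.max(), 2000)
        half = t * s * np.sqrt(1 + 1 / n + (grid - xbar) ** 2 / sxx)
        pred = intercept + slope * grid
        ok = (pred - half >= dqo[0] * grid) & (pred + half <= dqo[1] * grid)
        return grid[ok][0] if ok.any() else np.nan

    # Toy spiked-replicate data (hypothetical numbers, for illustration only).
    true_c = np.repeat([0.5, 1.0, 2.0, 4.0, 8.0], 4)
    rng = np.random.default_rng(2)
    meas = true_c * rng.normal(1.0, 0.12, true_c.size) + rng.normal(0, 0.05, true_c.size)
    print("LCMRL ~", lcmrl_ols(true_c, meas))
    ```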

  19. The effectiveness of texture analysis for mapping forest land using the panchromatic bands of Landsat 7, SPOT, and IRS imagery

    Treesearch

    Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco

    2002-01-01

    The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...

  20. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows this problem to be solved formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for a certain inhomogeneous Burgers-type equation. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, when the squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.

  1. Eigenspace-based minimum variance beamformer combined with Wiener postfilter for medical ultrasound imaging.

    PubMed

    Zeng, Xing; Chen, Cheng; Wang, Yuanyuan

    2012-12-01

    In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify the performance of imaging resolution and the contrast ratio (CR) to quantify the performance of imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformer, while the PSL is 127.2dB, 115dB and 60dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformer respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay and the results show that the new beamformer provides better robustness against the sound speed errors. Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
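    The general structure of an eigenspace-based minimum variance beamformer, i.e. minimum variance (Capon) weights projected onto the dominant eigenvectors of the sample covariance matrix, can be sketched as follows; this is a generic illustration with diagonal loading and synthetic snapshots, not the authors' exact estimator or their Wiener postfilter:

    ```python
    import numpy as np

    def esbmv_weights(R, a, signal_dims=1, diag_load=1e-3):
        """Eigenspace-based minimum variance weights (generic sketch).

        R : sample covariance matrix of the array snapshots
        a : steering vector for the imaging point
        signal_dims : number of dominant eigenvectors kept as the signal subspace
        """
        n = R.shape[0]
        Rl = R + diag_load * np.trace(R) / n * np.eye(n)   # diagonal loading
        Ri_a = np.linalg.solve(Rl, a)
        w_mv = Ri_a / (a.conj() @ Ri_a)                    # MV (Capon) weights
        vals, vecs = np.linalg.eigh(Rl)                    # ascending eigenvalues
        Es = vecs[:, -signal_dims:]                        # signal subspace
        return Es @ (Es.conj().T @ w_mv)                   # project MV weights

    # Toy usage on synthetic snapshots from an 8-element array.
    rng = np.random.default_rng(3)
    n_el, n_snap = 8, 200
    a = np.ones(n_el, dtype=complex)                       # broadside steering vector
    x = 1.0 * a[:, None] * rng.normal(size=n_snap)         # desired signal
    x = x + 0.5 * rng.normal(size=(n_el, n_snap))          # uncorrelated noise
    R = x @ x.conj().T / n_snap
    w = esbmv_weights(R, a)
    print("output power:", np.real(w.conj() @ R @ w))
    ```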

  2. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

    Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, the entropy optimization model, and the chance-constrained programming model. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved with genetic algorithms, comparisons between the hybrid intelligent algorithm and a genetic algorithm are given on numerical examples, which show that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large-size problems.

  3. The variance of length of stay and the optimal DRG outlier payments.

    PubMed

    Felder, Stefan

    2009-09-01

    Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis-related groups (DRGs), are complemented by outlier payments for long-stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.

  4. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members is selected from an image cube (image end-members) that best accounts for its spectral variance within a constrained, linear least-squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial-and-error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
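    Once image end-members have been chosen, the constrained linear least-squares mixing step itself can be illustrated in a few lines; the sketch below enforces non-negative abundances and a (softly weighted) sum-to-one constraint for a single pixel, and is not the end-member selection method proposed in the abstract:

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def unmix(pixel_spectrum, endmembers, sum_weight=1e3):
        """Constrained linear unmixing of one pixel (illustrative sketch).

        endmembers : (n_bands, n_endmembers) matrix of end-member spectra.
        Abundances are bounded to [0, 1]; the sum-to-one constraint is
        enforced softly by appending a heavily weighted extra row.
        """
        E = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
        y = np.append(pixel_spectrum, sum_weight)
        res = lsq_linear(E, y, bounds=(0.0, 1.0))
        return res.x

    # Toy example with two synthetic end-members and a 60/40 mixture.
    bands = np.linspace(0.4, 2.5, 50)
    em = np.column_stack([np.exp(-(bands - 1.0) ** 2), 0.5 + 0.2 * bands])
    true_f = np.array([0.6, 0.4])
    pixel = em @ true_f + np.random.default_rng(4).normal(0, 0.005, bands.size)
    print(unmix(pixel, em))
    ```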

  5. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  6. Performance of Language-Coordinated Collective Systems: A Study of Wine Recognition and Description

    PubMed Central

    Zubek, Julian; Denkiewicz, Michał; Dębska, Agnieszka; Radkowska, Alicja; Komorowska-Mach, Joanna; Litwin, Piotr; Stępień, Magdalena; Kucińska, Adrianna; Sitarska, Ewa; Komorowska, Krystyna; Fusaroli, Riccardo; Tylén, Kristian; Rączaszek-Leonardi, Joanna

    2016-01-01

    Most of our perceptions of and engagements with the world are shaped by our immersion in social interactions, cultural traditions, tools and linguistic categories. In this study we experimentally investigate the impact of two types of language-based coordination on the recognition and description of complex sensory stimuli: that of red wine. Participants were asked to taste, remember and successively recognize samples of wines within a larger set in a two-by-two experimental design: (1) either individually or in pairs, and (2) with or without the support of a sommelier card—a cultural linguistic tool designed for wine description. Both effectiveness of recognition and the kinds of errors in the four conditions were analyzed. While our experimental manipulations did not impact recognition accuracy, bias-variance decomposition of error revealed non-trivial differences in how participants solved the task. Pairs generally displayed reduced bias and increased variance compared to individuals; however, the variance dropped significantly when they used the sommelier card. The variance-reducing effect of the sommelier card was observed only in pairs; individuals did not seem to benefit from the cultural linguistic tool. Analysis of descriptions generated with the aid of sommelier cards shows that pairs were more coherent and discriminative than individuals. The findings are discussed in terms of global properties and dynamics of collective systems when constrained by different types of cultural practices. PMID:27729875

  7. Variation in Seed Germination of 134 Common Species on the Eastern Tibetan Plateau: Phylogenetic, Life History and Environmental Correlates

    PubMed Central

    Xu, Jing; Li, Wenlong; Zhang, Chunhui; Liu, Wei; Du, Guozhen

    2014-01-01

    Seed germination is a crucial stage in the life history of a species because it represents the pathway from adult to offspring, and it can affect the distribution and abundance of species in communities. In this study, we examined the effects of phylogenetic, life history and environmental factors on seed germination of 134 common species from an alpine/subalpine meadow on the eastern Tibetan Plateau. In one-way ANOVAs, phylogenetic groups (at or above order) explained 13.0% and 25.9% of the variance in germination percentage and mean germination time, respectively; life history attributes such as seed size and dispersal mode explained 3.7% and 2.1% of the variance in germination percentage and 6.3% and 8.7% of the variance in mean germination time, respectively; the environmental factors temperature and habitat explained 4.7% and 1.0% of the variance in germination percentage and 13.5% and 1.7% of the variance in mean germination time, respectively. Our results demonstrated that elevated temperature would lead to a significant increase in germination percentage and an accelerated germination. Multi-factorial ANOVAs showed that the three major factors contributing to differences in germination percentage and mean germination time in this alpine/subalpine meadow were phylogenetic attributes, temperature and seed size (independently explaining 10.5%, 4.7% and 1.4% of the variance in germination percentage and 14.9%, 13.5% and 2.7% of the variance in mean germination time, respectively). In addition, there were strong associations between phylogenetic group and life history attributes, and between life history attributes and environmental factors. Therefore, germination variation is constrained mainly by phylogenetic inertia in a community, and the germination variation correlated with phylogeny is also associated with life history attributes, suggesting a role of niche adaptation in the conservation of germination variation within lineages. Meanwhile, selection can maintain the association between germination behavior and the environmental conditions within a lineage. PMID:24893308

  8. Constrained binary classification using ensemble learning: an application to cost-efficient targeted PrEP strategies.

    PubMed

    Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya

    2018-01-30

    Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof of concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
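    One member of this constrained class, minimizing the rate of positive predictions subject to a minimum sensitivity, reduces to a one-dimensional threshold search once a risk score is available; the bare-bones sketch below illustrates that search and is not the Super Learner ensemble itself:

    ```python
    import numpy as np

    def threshold_min_positives(scores, labels, min_sensitivity=0.9):
        """Smallest rate of positive predictions subject to a sensitivity floor.

        scores : risk scores from any classifier (higher = more at risk)
        labels : 1 for observed seroconverters, 0 otherwise
        Returns (threshold, rate_of_positives, achieved_sensitivity) or None.
        """
        scores, labels = np.asarray(scores), np.asarray(labels)
        best = None
        for t in np.unique(scores):                     # candidate thresholds
            pred = scores >= t
            sens = pred[labels == 1].mean()
            if sens >= min_sensitivity:
                rate = pred.mean()
                if best is None or rate < best[1]:
                    best = (t, rate, sens)
        return best

    rng = np.random.default_rng(5)
    y = rng.binomial(1, 0.1, 2000)
    s = rng.normal(y, 1.0)                              # noisy risk score
    print(threshold_min_positives(s, y, 0.9))
    ```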

  9. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
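    The two-stage idea, fitting ordinal structure first and metric structure second, can be caricatured with a crude random search; the sketch below is only a conceptual stand-in for OCLO (which builds on the maximum rank correlation estimator), using Kendall's τ as the primary criterion and least squares as the tie-breaker:

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def ordinal_then_ls(X, y, n_candidates=2000, seed=0):
        """Two-stage sketch of order-constrained fitting (not the OCLO algorithm).

        Stage 1: maximize Kendall's tau between X @ w and y over random unit
        directions w.  Stage 2: among near-ties in tau, keep the direction
        with the smallest least-squares error after optimal linear rescaling.
        """
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_candidates, X.shape[1]))
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        taus = np.array([kendalltau(X @ w, y)[0] for w in W])
        near_best = W[taus >= taus.max() - 1e-6]

        def sse(w):
            z = X @ w
            beta = np.polyfit(z, y, 1)                  # optimal rescaling
            return ((y - np.polyval(beta, z)) ** 2).sum()

        return min(near_best, key=sse)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -0.5, 0.2]) + rng.standard_t(2, 200)   # fat-tailed noise
    print(ordinal_then_ls(X, y))
    ```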

  10. Hydrogeomorphic controls on hyporheic and riparian transport in two headwater mountain streams during base flow recession

    NASA Astrophysics Data System (ADS)

    Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.; Harman, Ciaran; Gooseff, Michael N.; Singha, Kamini

    2016-02-01

    Solute transport along riparian and hyporheic flow paths is broadly expected to respond to dynamic hydrologic forcing by streams, aquifers, and hillslopes. However, direct observation of these dynamic responses is lacking, as is an understanding of the relative importance of geologic setting as a control on responses to dynamic hydrologic forcing. We conducted a series of four stream solute tracer injections through base flow recession in each of two watersheds with contrasting valley morphology in the H.J. Andrews Experimental Forest, monitoring tracer concentrations in the stream and in a network of shallow riparian wells in each watershed. We found that hyporheic mean arrival time, temporal variance, and fraction of stream water in the bedrock-constrained valley bottom and near large roughness elements in the wider valley bottom did not vary with discharge, suggesting minimal control by hydrologic forcing. Conversely, we observed increases in mean arrival time and temporal variance and decreasing fraction stream water with decreasing discharge near the hillslopes in the wider valley bottom. This may indicate that changes in stream discharge and valley bottom hydrology control transport in less constrained locations. We detail five hydrogeomorphic responses to base flow recession to explain observed spatial and temporal patterns in the interactions between streams and their valley bottoms. Models able to account for the transition from geologically dominated processes in the near-stream subsurface to hydrologically dominated processes near the hillslope will be required to predict solute transport and fate in valley bottoms of headwater mountain streams.

  11. Guidance strategies and analysis for low thrust navigation

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1973-01-01

    A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.

  12. Impact of longitudinal flying qualities upon the design of a transport with active controls

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Direct constrained parameter optimization was used to optimally size a medium-range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed with relaxed static stability. Additionally, a number of handling-quality-related design constants were studied with respect to their impact on the design.

  13. Concurrent schedules: Effects of time- and response-allocation constraints

    PubMed Central

    Davison, Michael

    1991-01-01

    Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632

  14. Finite element based stability-constrained weight minimization of sandwich composite ducts for airship applications

    NASA Astrophysics Data System (ADS)

    Khode, Urmi B.

    High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, which results in in-plane wall compressive stress that may cause duct buckling. An approach based upon finite element stability analysis, combined with a weight minimization search algorithm over ply layup and foam thickness, is utilized. Its goal is to achieve an optimized configuration of the sandwich composite as a solution to a constrained minimum weight design problem, for which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.

  15. Evidence for ultrafast outflows in radio-quiet AGNs - III. Location and energetics

    NASA Astrophysics Data System (ADS)

    Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.

    2012-05-01

    Using the results of a previous X-ray photoionization modelling of blueshifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this Letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ~0.0003-0.03 pc (~10^2-10^4 r_s) from the central black hole, consistent with what is expected for accretion disc winds/outflows. The mass outflow rates are constrained between ~0.01 and 1 M⊙ yr-1, corresponding to ≳5-10 per cent of the accretion rates. The average lower/upper limits on the mechanical power are in the range ~42.6-44.6 in log(erg s-1). However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.

  16. Quadrupole deformation (β, γ) of light Λ hypernuclei in a constrained relativistic mean field model: Shape evolution and shape polarization effect of the Λ hyperon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Bingnan; Zhao, Enguang

    2011-07-15

    The shapes of light normal nuclei and Λ hypernuclei are investigated in the (β, γ) deformation plane by using a newly developed constrained relativistic mean field (RMF) model. As examples, the results for some C, Mg, and Si nuclei are presented and discussed in detail. We find that for normal nuclei the present RMF calculations and previous Skyrme-Hartree-Fock models predict similar trends of the shape evolution as the neutron number increases. However, some quantitative aspects from these two approaches, such as the depth of the minimum and the softness in the γ direction, differ considerably for several nuclei. For Λ hypernuclei, in most cases, the addition of a Λ hyperon slightly shifts the location of the ground state minimum toward smaller β and softer γ in the potential energy surface E(β, γ). There are three exceptions, namely ^13_ΛC, ^23_ΛC, and ^31_ΛSi, in which the polarization effect of the additional Λ is so strong that the shapes of these three hypernuclei are drastically different from those of their corresponding core nuclei.

  17. On a Minimum Problem in Smectic Elastomers

    NASA Astrophysics Data System (ADS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-07-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional through two terms, the first nematic and the second accounting for the tilting phenomenon; then, working within the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  18. Uncertainty relation for the discrete Fourier transform.

    PubMed

    Massar, Serge; Spindel, Philippe

    2008-05-16

    We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^(iφ)VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
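    The commutation relation and its link to the discrete Fourier transform can be checked numerically with the standard clock and shift operators in dimension d (for d = 2 these reduce to the Pauli operators mentioned above); a small sketch, assuming that standard construction:

    ```python
    import numpy as np

    d = 5
    omega = np.exp(2j * np.pi / d)                    # e^{i phi} with phi = 2*pi/d
    U = np.roll(np.eye(d), -1, axis=0)                # shift operator: (Ux)[i] = x[i+1]
    V = np.diag(omega ** np.arange(d))                # clock (modular phase) operator
    F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

    print(np.allclose(U @ V, omega * (V @ U)))        # UV = e^{i phi} VU  -> True
    print(np.allclose(F.conj().T @ U @ F, V))         # the DFT maps U onto V -> True
    ```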

  19. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each of the blades was modelled with a single torsional degree of freedom.

  20. Concentration variance decay during magma mixing: a volcanic chronometer.

    PubMed

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
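    The exponential concentration variance decay can be calibrated and inverted with a simple curve fit; the numbers below are synthetic placeholders for the time series of mixing experiments, so the sketch only illustrates how a fitted decay rate would convert a measured residual variance into a mingling-to-eruption time:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cvd_model(t, rate, var0):
        """Exponential concentration-variance decay: var(t) = var0 * exp(-rate * t)."""
        return var0 * np.exp(-rate * t)

    # Synthetic "mixing experiment": normalized concentration variance at a few
    # mixing times (illustrative numbers only, not the experimental data).
    t_minutes = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    variance = np.array([1.00, 0.62, 0.40, 0.16, 0.03])

    (rate, var0), _ = curve_fit(cvd_model, t_minutes, variance, p0=(0.05, 1.0))
    print(f"decay rate = {rate:.3f} per minute")

    # Inverting the fit gives a mingling-to-eruption time for an observed
    # residual variance, which is the geochronometer idea described above.
    var_observed = 0.10
    print(f"elapsed time ~ {np.log(var0 / var_observed) / rate:.1f} minutes")
    ```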

  1. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
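    The flavor of a constrained ℓ1 formulation can be illustrated with a small linear program that seeks a sparse weight vector satisfying the estimated moment conditions to within a tolerance; this Dantzig-selector-style sketch is a stand-in for, not a reproduction of, the LPO estimator described above:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def sparse_weights(cov, mu, delta=0.05):
        """Sparse portfolio weights via l1 minimization (illustrative sketch).

        Minimize ||w||_1 subject to ||cov @ w - mu||_inf <= delta, solved as a
        linear program with the split w = w_plus - w_minus, both non-negative.
        """
        p = len(mu)
        c = np.ones(2 * p)                              # objective: ||w||_1
        A = np.hstack([cov, -cov])
        A_ub = np.vstack([A, -A])
        b_ub = np.concatenate([mu + delta, -(mu - delta)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        return res.x[:p] - res.x[p:]

    rng = np.random.default_rng(7)
    p, n = 50, 120                                      # many assets, few observations
    returns = rng.normal(0.001, 0.02, (n, p))
    cov, mu = np.cov(returns, rowvar=False), returns.mean(axis=0)
    w = sparse_weights(cov, mu, delta=0.002)
    print("non-zero positions:", int(np.sum(np.abs(w) > 1e-6)))
    ```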

  2. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
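    The envelope attribute and one simple way of converting it into local smoothness weights can be sketched as follows; the weighting function here is an illustrative choice, and the directional-gradient step the paper uses to modify horizontal and vertical constraints separately is not reproduced:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def envelope_weights(seismic_image, beta=5.0):
        """Turn a reflection seismic image into smoothness-constraint weights.

        The envelope (magnitude of the analytic signal along each trace) is
        large at strong reflectors; weights are reduced there so that an MT
        inversion may place resistivity contrasts at imaged interfaces.
        beta controls how strongly reflectors relax the smoothness constraint.
        """
        env = np.abs(hilbert(seismic_image, axis=0))    # envelope, trace by trace
        env = env / env.max()
        return 1.0 / (1.0 + beta * env)                 # in (0, 1], small at reflectors

    # Toy "image": one flat reflector in a 200-sample by 50-trace section.
    section = np.zeros((200, 50))
    section[120, :] = 1.0
    w = envelope_weights(section)
    print(w.min(), w.max())
    ```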

  3. Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years

    NASA Astrophysics Data System (ADS)

    Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.

    2014-12-01

    Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy range (bendover scale) and between the energy and cutoff range (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR) and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale appear to be log-normally distributed.

  4. A new Method for Determining the Interplanetary Current-Sheet Local Orientation

    NASA Astrophysics Data System (ADS)

    Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.

    2003-03-01

    In this work we have developed a new method for determining the interplanetary current sheet local parameters. The method, called 'HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the 'classic MVM' (from Minimum Variance Method). The results suggest that, despite the fact that in many cases they behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower noise level sensitivity than that of MVM.
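    The "classic MVM" benchmark is the standard minimum variance analysis of the magnetic field across the crossing: the current-sheet normal is taken as the eigenvector of the field covariance matrix with the smallest eigenvalue. Below is a compact sketch of that reference method (not of HYTARO) applied to a synthetic Harris-like crossing:

    ```python
    import numpy as np

    def minimum_variance_normal(B):
        """Classic minimum variance analysis (MVM) of a field rotation.

        B : (n_samples, 3) array of magnetic field vectors across the crossing.
        Returns the estimated current-sheet normal (eigenvector of the
        covariance matrix with the smallest eigenvalue) and the eigenvalues.
        """
        B = np.asarray(B, float)
        M = np.cov(B, rowvar=False)          # 3x3 magnetic variance matrix
        vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
        return vecs[:, 0], vals              # minimum-variance direction

    # Synthetic crossing: the field rotates in the x-y plane, normal along z.
    t = np.linspace(-1, 1, 400)
    B = np.column_stack([5 * np.tanh(t / 0.2),                       # Harris-like rotation
                         3 * np.ones_like(t),
                         0.3 * np.random.default_rng(8).normal(size=t.size)])
    normal, eigvals = minimum_variance_normal(B)
    print(normal, eigvals)
    ```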

  5. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.

  6. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of final treatment outcome. The purpose of this study was to evaluate which of two methods gives the maximum reproducibility with minimum variation of natural head position: the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that both methods for obtaining natural head position, the mirror method and the fluid level device method, were comparable, but maximum reproducibility was obtained with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot. The minimum variance was seen with the fluid level device method, as shown by precision and the Pearson correlation. The two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.

  7. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance. The number of parameters was greatly reduced, from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that high values of the efficiency criteria did not necessarily indicate excellent performance for the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well. However, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on the three signatures above was still achieved in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study helps to further the multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
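    A variance-based (Sobol') step of this kind is commonly run with the SALib package; the sketch below assumes SALib and uses placeholder parameter names, bounds, and a cheap stand-in function rather than actual DHSVM runs:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Placeholder problem definition: three of the sixteen retained parameters
    # (names and bounds are illustrative, not the values used in the study).
    problem = {
        "num_vars": 3,
        "names": ["lateral_conductivity", "porosity", "understory_min_resistance"],
        "bounds": [[1e-5, 1e-2], [0.3, 0.6], [100.0, 600.0]],
    }

    # In the real study each sample would drive a DHSVM run; here a cheap
    # stand-in maps parameters to a scalar signature (e.g. water yield).
    def model(x):
        k, phi, rs = x
        return 0.5 * np.log10(k) + 4.0 * phi - 0.001 * rs

    param_values = saltelli.sample(problem, 1024)        # N * (2D + 2) samples
    Y = np.apply_along_axis(model, 1, param_values)
    Si = sobol.analyze(problem, Y)
    print(Si["S1"])     # first-order indices
    print(Si["ST"])     # total-order indices
    ```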

  8. Dynamic and Geometric Analyses of Nudaurelia capensis ωVirus Maturation Reveal the Energy Landscape of Particle Transitions

    PubMed Central

    Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.

    2014-01-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180

  9. Dynamic and geometric analyses of Nudaurelia capensis ω virus maturation reveal the energy landscape of particle transitions.

    PubMed

    Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E

    2014-04-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Mars double-aeroflyby free returns

    NASA Astrophysics Data System (ADS)

    Jesick, Mark

    2017-09-01

    Mars double-flyby free-return trajectories that pass twice through the Martian atmosphere are documented. This class of trajectories is advantageous for potential Mars atmospheric sample return missions because of its low geocentric energy at departure and arrival, because it would enable two sample collections at unique locations during different Martian seasons, and because of its lack of deterministic maneuvers. Free return opportunities are documented over Earth departure dates ranging from 2015 through 2100, with viable missions available every Earth-Mars synodic period. After constraining the maximum lift-to-drag ratio to be less than one, the minimum observed Earth departure hyperbolic excess speed is 3.23 km/s, the minimum Earth atmospheric entry speed is 11.42 km/s, and the minimum round-trip flight time is 805 days. An algorithm using simplified dynamics is developed along with a method to derive an initial estimate for trajectories in a more realistic dynamic model. Multiple examples are presented, including free returns that pass outside and inside of Mars's appreciable atmosphere.

  11. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optimums. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  12. A New Look at Some Solar Wind Turbulence Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2006-01-01

    Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time-permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.

  13. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade

    PubMed Central

    Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.

    2011-01-01

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500

  14. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade.

    PubMed

    Price, Stephen F; Payne, Antony J; Howat, Ian M; Smith, Benjamin E

    2011-05-31

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland's three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing.

  15. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle

    PubMed Central

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints. PMID:27303311

  16. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle.

    PubMed

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints.

  17. LCMV beamforming for a novel wireless local positioning system: a stationarity analysis

    NASA Astrophysics Data System (ADS)

    Tong, Hui; Zekavat, Seyed A.

    2005-05-01

    In this paper, we discuss the implementation of Linearly Constrained Minimum Variance (LCMV) beamforming (BF) for a novel Wireless Local Positioning System (WLPS). The main components of WLPS are: (a) a dynamic base station (DBS), and (b) a transponder (TRX), both mounted on mobiles. WLPS might be considered as a node in a Mobile Ad hoc NETwork (MANET). Each TRX is assigned an identification (ID) code. The DBS transmits periodic short bursts of energy that contain an ID request (IDR) signal. The TRX transmits back its ID code (a signal with a limited duration) to the DBS as soon as it detects the IDR signal. Hence, the DBS receives non-continuous signals transmitted by the TRX. In this work, we assume asynchronous Direct-Sequence Code Division Multiple Access (DS-CDMA) transmission from the TRX, with an antenna array and LCMV BF mounted at the DBS, and we discuss the estimation of the observed signal covariance matrix for LCMV BF. In LCMV BF, the observed covariance matrix must be estimated. Usually, the sample covariance matrix (SCM) is used to estimate this covariance matrix, assuming a stationary model for the observed data, which is the case in many communication systems. However, due to the non-stationary behavior of the received signal in WLPS systems, the SCM does not lead to high WLPS performance, even compared to a conventional beamformer. A modified covariance matrix estimation method that utilizes the cyclostationarity property of the WLPS system is introduced as a solution to this problem. It is shown that this method leads to a significant improvement in WLPS performance.
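
    For readers unfamiliar with the estimator being discussed, the sketch below shows the textbook LCMV weight solution together with a plain sample covariance matrix (SCM) estimate; it does not reproduce the cyclostationarity-based covariance estimator proposed in the record, and the array size, number of snapshots, and constraint values are assumptions.

    ```python
    import numpy as np

    def sample_covariance(X):
        """SCM from snapshots X with shape (n_elements, n_snapshots)."""
        return (X @ X.conj().T) / X.shape[1]

    def lcmv_weights(R, C, f):
        """LCMV: minimize w^H R w subject to C^H w = f."""
        Ri_C = np.linalg.solve(R, C)                  # R^-1 C
        return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

    # Illustrative 8-element array, unit-gain constraint toward broadside.
    n = 8
    rng = np.random.default_rng(0)
    X = (rng.standard_normal((n, 200)) + 1j * rng.standard_normal((n, 200))) / np.sqrt(2)
    R = sample_covariance(X) + 1e-3 * np.eye(n)       # diagonal loading for stability
    C = np.ones((n, 1), dtype=complex)                # steering vector at broadside
    f = np.array([1.0 + 0j])
    w = lcmv_weights(R, C, f)
    print(np.allclose(C.conj().T @ w, f))             # constraint satisfied
    ```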

  18. A measurement of the cosmic microwave background gravitational lensing potential from 100 square degrees of SPTpol data

    DOE PAGES

    Story, K. T.; Hanson, D.; Ade, P. A. R.; ...

    2015-08-28

    Here, we present a measurement of the cosmic microwave background (CMB) gravitational lensing potential using data from the first two seasons of observations with SPTpol, the polarization-sensitive receiver currently installed on the South Pole Telescope. The observations used in this work cover 100 deg^2 of sky with arcminute resolution at 150 GHz. Using a quadratic estimator, we make maps of the CMB lensing potential from combinations of CMB temperature and polarization maps. We combine these lensing potential maps to form a minimum-variance (MV) map. The lensing potential is measured with a signal-to-noise ratio of greater than one for angular multipoles between 100 < L < 250. This is the highest signal-to-noise mass map made from the CMB to date and will be powerful in cross-correlation with other tracers of large-scale structure. We calculate the power spectrum of the lensing potential for each estimator, and we report the value of the MV power spectrum between 100 < L < 2000 as our primary result. We constrain the ratio of the spectrum to a fiducial ΛCDM model to be A_MV = 0.92 ± 0.14 (stat.) ± 0.08 (sys.). Restricting ourselves to polarized data only, we find A_POL = 0.92 ± 0.24 (stat.) ± 0.11 (sys.). This measurement rejects the hypothesis of no lensing at 5.9σ using polarization data alone, and at 14σ using both temperature and polarization data.
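
    The minimum-variance (MV) map described above combines individual estimators by inverse-variance weighting. The sketch below illustrates that combination rule on synthetic one-dimensional data; the per-estimator noise levels are hypothetical and no SPTpol-specific processing is implied.

    ```python
    import numpy as np

    def minimum_variance_combination(estimates, noise_vars):
        """Inverse-variance weighted combination of unbiased estimators of the same field."""
        weights = [1.0 / v for v in noise_vars]
        combined = sum(w * e for w, e in zip(weights, estimates))
        return combined / sum(weights)

    # Toy example: two noisy estimates of the same underlying field.
    rng = np.random.default_rng(1)
    truth = rng.standard_normal(1000)
    est_tt = truth + rng.normal(scale=1.0, size=1000)   # temperature-like estimator, noisier
    est_pol = truth + rng.normal(scale=0.5, size=1000)  # polarization-like estimator
    mv = minimum_variance_combination([est_tt, est_pol], [1.0**2, 0.5**2])
    # The residual variance of the MV combination is below either input's.
    print(np.var(est_tt - truth), np.var(est_pol - truth), np.var(mv - truth))
    ```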

  19. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads as well as the respective computational performance.

  20. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
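
    As background for the robust formulation above, the sketch below solves the nominal (non-robust) mean-variance problem that the interval random uncertainty set generalizes: minimize portfolio variance subject to a target mean return and full investment. The asset statistics and target return are synthetic assumptions.

    ```python
    import numpy as np

    def min_variance_portfolio(mu, Sigma, target_return):
        """Minimum-variance weights with sum(w) = 1 and mu @ w = target_return (linear KKT solve)."""
        n = len(mu)
        ones = np.ones(n)
        K = np.zeros((n + 2, n + 2))
        K[:n, :n] = 2.0 * Sigma          # stationarity block
        K[:n, n] = mu
        K[:n, n + 1] = ones
        K[n, :n] = mu                     # target-return constraint
        K[n + 1, :n] = ones               # budget constraint
        rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
        return np.linalg.solve(K, rhs)[:n]

    # Synthetic three-asset example (illustrative numbers only).
    mu = np.array([0.08, 0.12, 0.10])
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.06]])
    w = min_variance_portfolio(mu, Sigma, target_return=0.10)
    print(w, w.sum(), mu @ w)             # weights sum to 1 and hit the target return
    ```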

  1. Intraplate deformation, stress in the lithosphere and the driving mechanism for plate motions

    NASA Technical Reports Server (NTRS)

    Albee, Arden L.

    1993-01-01

    The initial research proposed was to use the predictions of geodynamical models of mantle flow, combined with geodetic observations of intraplate strain and stress, to better constrain mantle convection and the driving mechanism for plate motions and deformation. It is only now that geodetic observations of intraplate strain are becoming sufficiently well resolved to make them useful for substantial geodynamical inference. A model of flow in the mantle was developed that explains almost 90 percent of the variance in the observed long-wavelength nonhydrostatic geoid.

  2. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the larger of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
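
    The core idea of the record above, replacing voxels outside the brain mask with the mean in-mask intensity before projecting, can be sketched in a few lines; the volume, mask, and intensity values below are synthetic, and the two-mask and morphological refinement steps of the method are omitted.

    ```python
    import numpy as np

    def constrained_min_ip(volume, brain_mask, slab_axis=2):
        """Minimum intensity projection in which non-brain voxels are replaced by the brain mean,
        so low-intensity skull does not mask superficial hemorrhages near the brain surface."""
        vol = volume.astype(float).copy()
        vol[~brain_mask] = vol[brain_mask].mean()   # neutralize bone / background
        return vol.min(axis=slab_axis)              # minIP along the slab direction

    # Synthetic 3-D volume: bright tissue, dark "skull" shell, one dark lesion inside the mask.
    rng = np.random.default_rng(2)
    vol = rng.normal(100.0, 5.0, size=(64, 64, 16))
    mask = np.zeros_like(vol, dtype=bool)
    mask[8:56, 8:56, :] = True                      # crude brain mask
    vol[~mask] = 10.0                               # low-intensity skull/background
    vol[30:33, 30:33, 5] = 20.0                     # superficial dark hemorrhage inside the mask
    mip = constrained_min_ip(vol, mask)
    print(mip[31, 31])                              # the lesion (~20) survives the projection
    ```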

  3. Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland

    USGS Publications Warehouse

    Pluhowski, E.J.

    1981-01-01

    Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
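
    A harmonic fit of the kind described can be obtained by ordinary least squares on sine and cosine regressors. The sketch below assumes the common single-harmonic form T = M + A sin(2πt/365 + φ) and uses synthetic daily temperatures; it is not fitted to the Muddy Creek data.

    ```python
    import numpy as np

    def fit_annual_harmonic(day_of_year, temps):
        """Least-squares fit of T = M + a*sin(2*pi*t/365) + b*cos(2*pi*t/365).

        Returns mean M, amplitude A, and phase phi so that T = M + A*sin(2*pi*t/365 + phi).
        """
        w = 2.0 * np.pi * np.asarray(day_of_year) / 365.0
        X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
        M, a, b = np.linalg.lstsq(X, np.asarray(temps), rcond=None)[0]
        return M, np.hypot(a, b), np.arctan2(b, a)

    # Synthetic daily water temperatures with noise.
    t = np.arange(365)
    temps = 12.5 + 9.0 * np.sin(2 * np.pi * t / 365 - 1.9) \
            + np.random.default_rng(3).normal(0, 2.0, 365)
    M, A, phi = fit_annual_harmonic(t, temps)
    print(round(M, 1), round(A, 1))   # recovers the mean (~12.5) and amplitude (~9.0)
    ```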

  4. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
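
    For a Gaussian noise model with known covariance and a differentiable forward model, the Cramer-Rao bound on each parameter follows from the Fisher information matrix, FIM = J^T Σ^{-1} J, with the bound given by the diagonal of its inverse. The sketch below uses a toy two-parameter forward model as a stand-in for a bio-optical reflectance model; the wavelength grid and noise covariance are assumptions.

    ```python
    import numpy as np

    def cramer_rao_bounds(jacobian, noise_cov):
        """Diagonal of the inverse Fisher information for a Gaussian model with known covariance.

        jacobian : (n_bands, n_params) derivatives of the forward model w.r.t. the parameters
        noise_cov: (n_bands, n_bands) environmental noise covariance
        """
        fim = jacobian.T @ np.linalg.solve(noise_cov, jacobian)
        return np.diag(np.linalg.inv(fim))            # minimum attainable variances

    def toy_forward(params, wavelengths):
        """Illustrative 2-parameter model (a stand-in, not a bio-optical reflectance model)."""
        a, b = params
        return a * np.exp(-b * wavelengths)

    def numerical_jacobian(f, params, wavelengths, eps=1e-6):
        p = np.asarray(params, float)
        cols = []
        for i in range(p.size):
            dp = np.zeros_like(p); dp[i] = eps
            cols.append((f(p + dp, wavelengths) - f(p - dp, wavelengths)) / (2 * eps))
        return np.column_stack(cols)

    wl = np.linspace(0.4, 0.7, 30)                    # assumed wavelength grid (micrometers)
    J = numerical_jacobian(toy_forward, [0.2, 1.5], wl)
    Sigma = 1e-4 * np.eye(len(wl))                    # assumed noise covariance
    print(np.sqrt(cramer_rao_bounds(J, Sigma)))       # minimum standard deviations per parameter
    ```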

  5. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
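
    One simple way to screen for influential Monte Carlo inputs, not necessarily the method used in the record, is to rank inputs by the squared correlation between each sampled input and the model output. The sketch below does this for a toy four-input model; the input names and distributions are hypothetical.

    ```python
    import numpy as np

    def rank_influential_inputs(samples, output, names):
        """Rank inputs by squared Pearson correlation with the output (a crude variance share)."""
        scores = {}
        for j, name in enumerate(names):
            r = np.corrcoef(samples[:, j], output)[0, 1]
            scores[name] = r ** 2
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy multimedia-style model: the output is dominated by two of the four inputs.
    rng = np.random.default_rng(4)
    X = rng.lognormal(mean=0.0, sigma=0.5, size=(5000, 4))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 2.0 * X[:, 2] ** 2 + 0.01 * X[:, 3]
    print(rank_influential_inputs(X, y, ["emission", "half_life", "Kow", "depth"]))
    ```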

  6. Cosmic variance of the galaxy cluster weak lensing signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruen, D.; Seitz, S.; Becker, M. R.

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14-10^15 h^-1 M_⊙, z = 0.25-0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  7. Cosmic variance of the galaxy cluster weak lensing signal

    DOE PAGES

    Gruen, D.; Seitz, S.; Becker, M. R.; ...

    2015-04-13

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14-10^15 h^-1 M_⊙, z = 0.25-0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  8. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  9. Good initialization model with constrained body structure for scene text recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Anna; Wang, Guoyou; Dong, Yangbo

    2016-09-01

    Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in the cropped text region. The evaluation results on the benchmark datasets demonstrate that our proposed scheme outperforms the state-of-the-art methods in both scene character recognition and word recognition.

  10. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed and affect whether or not a global minimum is found and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.

  11. Response to selection while maximizing genetic variance in small populations.

    PubMed

    Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E

    2016-09-20

    Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance within a single population could be a valuable option. The aim of this work was to study the effect of maximizing genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), a MVT scenario with a restriction on increases in average inbreeding (D), a MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is then a valuable alternative, in particular for a long-term response to selection.

  12. A high-resolution speleothem record of western equatorial Pacific rainfall: Implications for Holocene ENSO evolution

    NASA Astrophysics Data System (ADS)

    Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.

    2016-05-01

    The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
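
    Interannual (2-7 yr) variance of the kind reported above can be isolated from an evenly resampled series with a band-pass filter. The sketch below assumes quarterly sampling and a Butterworth filter, both choices of convenience, and operates on a synthetic series rather than the Borneo record.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def enso_band_variance(series, dt_years, low_period=7.0, high_period=2.0):
        """Variance of the 2-7 yr band of an evenly sampled series (dt_years between samples)."""
        fs = 1.0 / dt_years                                  # samples per year
        b, a = butter(4, [1.0 / low_period, 1.0 / high_period], btype="bandpass", fs=fs)
        return np.var(filtfilt(b, a, series - np.mean(series)))

    # Synthetic sub-annual series: 4 samples/yr, a weak 4-yr cycle plus noise.
    dt = 0.25
    t = np.arange(0, 200, dt)
    rng = np.random.default_rng(5)
    series = 0.3 * np.sin(2 * np.pi * t / 4.0) + rng.normal(0, 0.2, t.size)
    print(enso_band_variance(series, dt))
    ```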

  13. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  14. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-atomic emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were determined in N. roborowskii, of which V could not be detected. In addition, Na, K and Ca showed high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were obtained from the original data. The cumulative variance contribution rate is 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method was simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. All the results will provide a good basis for comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
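
    Principal component analysis of an element-concentration table reduces to an SVD of the standardized data matrix. The sketch below computes explained-variance ratios and loadings on synthetic data; the sample count matches the fifteen regions, but the values and the number of elements are illustrative.

    ```python
    import numpy as np

    def pca(data):
        """PCA on standardized columns; returns explained-variance ratios and loadings."""
        X = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        return explained, Vt              # rows of Vt are component loadings on the elements

    # Synthetic table: 15 samples (regions) x 5 elements (illustrative stand-ins).
    rng = np.random.default_rng(6)
    base = rng.normal(size=(15, 2))                     # two latent geographic factors
    loadings = rng.normal(size=(2, 5))
    table = base @ loadings + 0.1 * rng.normal(size=(15, 5))
    explained, components = pca(table)
    print(np.round(np.cumsum(explained), 3))            # cumulative variance contribution
    ```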

  15. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.

  16. Plasma dynamics on current-carrying magnetic flux tubes

    NASA Technical Reports Server (NTRS)

    Swift, Daniel W.

    1992-01-01

    A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.

  17. Constraining Alternative Theories of Gravity Using Pulsar Timing Arrays

    NASA Astrophysics Data System (ADS)

    Cornish, Neil J.; O'Beirne, Logan; Taylor, Stephen R.; Yunes, Nicolás

    2018-05-01

    The opening of the gravitational wave window by ground-based laser interferometers has made possible many new tests of gravity, including the first constraints on polarization. It is hoped that, within the next decade, pulsar timing will extend the window by making the first detections in the nanohertz frequency regime. Pulsar timing offers several advantages over ground-based interferometers for constraining the polarization of gravitational waves due to the many projections of the polarization pattern provided by the different lines of sight to the pulsars, and the enhanced response to longitudinal polarizations. Here, we show that existing results from pulsar timing arrays can be used to place stringent limits on the energy density of longitudinal stochastic gravitational waves. However, unambiguously distinguishing these modes from noise will be very difficult due to the large variances in the pulsar-pulsar correlation patterns. Existing upper limits on the power spectrum of pulsar timing residuals imply that the amplitude of vector longitudinal (VL) and scalar longitudinal (SL) modes at frequencies of 1/year are constrained, A_VL < 4×10^-16 and A_SL < 4×10^-17, while the bounds on the energy density for a scale invariant cosmological background are Ω_VL h^2 < 4×10^-11 and Ω_SL h^2 < 3×10^-13.

  18. Constraining Alternative Theories of Gravity Using Pulsar Timing Arrays.

    PubMed

    Cornish, Neil J; O'Beirne, Logan; Taylor, Stephen R; Yunes, Nicolás

    2018-05-04

    The opening of the gravitational wave window by ground-based laser interferometers has made possible many new tests of gravity, including the first constraints on polarization. It is hoped that, within the next decade, pulsar timing will extend the window by making the first detections in the nanohertz frequency regime. Pulsar timing offers several advantages over ground-based interferometers for constraining the polarization of gravitational waves due to the many projections of the polarization pattern provided by the different lines of sight to the pulsars, and the enhanced response to longitudinal polarizations. Here, we show that existing results from pulsar timing arrays can be used to place stringent limits on the energy density of longitudinal stochastic gravitational waves. However, unambiguously distinguishing these modes from noise will be very difficult due to the large variances in the pulsar-pulsar correlation patterns. Existing upper limits on the power spectrum of pulsar timing residuals imply that the amplitude of vector longitudinal (VL) and scalar longitudinal (SL) modes at frequencies of 1/year are constrained, A_{VL}<4×10^{-16} and A_{SL}<4×10^{-17}, while the bounds on the energy density for a scale invariant cosmological background are Ω_{VL}h^{2}<4×10^{-11} and Ω_{SL}h^{2}<3×10^{-13}.

  19. Economic evaluation of flying-qualities design criteria for a transport configured with relaxed static stability

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Direct constrained parameter optimization was used to optimally size a medium range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed to take maximum advantage of relaxed static stability augmentation systems. Additionally, a number of handling qualities related design constants were studied with respect to their impact on the design.

  20. Optimization of Turkish Air Force SAR Units Forward Deployment Points for a Central Based SAR Force Structure

    DTIC Science & Technology

    2015-03-26

    Turkish Airborne Early Warning and Control (AEW&C) aircraft in the combat arena. He examines three combat scenarios Turkey might encounter to cover and...to limited SAR assets, constrained budgets, logistic-maintenance problems, and high risk level of military flights. In recent years, the Turkish Air...model, Set Covering Location Problem (SCLP), defines the minimum number of SAR DPs to cover all fighter aircraft training areas (TAs). The second

  1. Effects of Visual Complexity and Sublexical Information in the Occipitotemporal Cortex in the Reading of Chinese Phonograms: A Single-Trial Analysis with MEG

    ERIC Educational Resources Information Center

    Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec

    2011-01-01

    We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…

  2. Evidence for Ultra-Fast Outflows in Radio-Quiet AGNs: III - Location and Energetics

    NASA Technical Reports Server (NTRS)

    Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.

    2012-01-01

    Using the results of a previous X-ray photo-ionization modelling of blue-shifted Fe K absorption lines in a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this letter we estimate the location and energetics of the associated ultra-fast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval approximately 0.0003-0.03 pc (approximately 10^2-10^4 τ_s) from the central black hole, consistent with what is expected for accretion disk winds/outflows. The mass outflow rates are constrained between approximately 0.01-1 solar masses per year, corresponding to ≳5-10% of the accretion rates. The average lower and upper limits on the mechanical power are log E_K ≈ 42.6-44.6 erg/s. However, the minimum possible value of the ratio between the mechanical power and the bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.

  3. The evens and odds of CMB anomalies

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; Kitazawa, N.; Lattanzi, M.; Mandolesi, N.; Natoli, P.; Sagnotti, A.

    2018-06-01

    The lack of power of large-angle CMB anisotropies is known to increase its statistical significance at higher Galactic latitudes, where a string-inspired pre-inflationary scale Δ can also be detected. Considering the Planck 2015 data, and relying largely on a Bayesian approach, we show that the effect is mostly driven by the even-ℓ harmonic multipoles with ℓ ≲ 20, which appear sizably suppressed in a way that is robust with respect to Galactic masking, along with the corresponding detections of Δ. On the other hand, the first odd-ℓ multipoles are only suppressed at high Galactic latitudes. We investigate this behavior in different sky masks, constraining Δ through even and odd multipoles, and we elaborate on possible implications. We include low-ℓ polarization data which, despite being noise-limited, help in attaining confidence levels of about 3σ in the detection of Δ. We also show by direct forecasts that a future all-sky E-mode cosmic-variance-limited polarization survey may push the constraining power for Δ beyond 5σ.

  4. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147

  5. Concentration variance decay during magma mixing: a volcanic chronometer

    PubMed Central

    Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.

    2015-01-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
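
    The chronometer rests on the exponential decay of concentration variance. A minimal sketch of the calibration and inversion steps is given below: fit the decay rate from time-series experiments, then convert the residual variance of a natural sample into an elapsed mixing time. All numbers are synthetic, not the Campi Flegrei calibration.

    ```python
    import numpy as np

    def fit_decay_rate(times, variances):
        """Fit sigma2(t) = sigma2(0) * exp(-k*t) by linear regression on the log variance."""
        slope, log_s0 = np.polyfit(times, np.log(variances), 1)
        return -slope, np.exp(log_s0)

    def time_since_mixing(variance_now, variance_initial, k):
        """Invert the exponential decay for the elapsed mixing time."""
        return np.log(variance_initial / variance_now) / k

    # Synthetic calibration experiments (minutes vs. normalized concentration variance).
    t_exp = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    var_exp = np.exp(-0.05 * t_exp) * (1 + np.random.default_rng(7).normal(0, 0.02, 5))
    k, var0 = fit_decay_rate(t_exp, var_exp)
    # A sample whose variance has decayed to 0.2 of its initial value gives ~32 minutes.
    print(round(time_since_mixing(variance_now=0.2, variance_initial=var0, k=k), 1))
    ```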

  6. Event-Based Variance-Constrained H∞ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the H∞ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  7. Heritability of female extra-pair paternity rate in song sparrows (Melospiza melodia)

    PubMed Central

    Reid, Jane M.; Arcese, Peter; Sardell, Rebecca J.; Keller, Lukas F.

    2011-01-01

    The forces driving the evolution of extra-pair reproduction in socially monogamous animals remain widely debated and unresolved. One key hypothesis is that female extra-pair reproduction evolves through indirect genetic benefits, reflecting increased additive genetic value of extra-pair offspring. Such evolution requires that a female's propensity to produce offspring that are sired by an extra-pair male is heritable. However, additive genetic variance and heritability in female extra-pair paternity (EPP) rate have not been quantified, precluding accurate estimation of the force of indirect selection. Sixteen years of comprehensive paternity and pedigree data from socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia) showed significant additive genetic variance and heritability in the proportion of a female's offspring that was sired by an extra-pair male, constituting major components of the genetic architecture required for extra-pair reproduction to evolve through indirect additive genetic benefits. However, estimated heritabilities were moderately small (0.12 and 0.18 on the observed and underlying latent scales, respectively). The force of selection on extra-pair reproduction through indirect additive genetic benefits may consequently be relatively weak. However, the additive genetic variance and non-zero heritability observed in female EPP rate allow for multiple further genetic mechanisms to drive and constrain mating system evolution. PMID:20980302

  8. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
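
    The record derives head means and variances analytically; a brute-force point of comparison is a Monte Carlo simulation of a lumped cell with uncertain recharge, as sketched below. The water-balance form and every parameter value are hypothetical and chosen only to illustrate how head variance builds up under stochastic recharge.

    ```python
    import numpy as np

    def simulate_cell_heads(n_draws, n_steps, h0, recharge_mean, recharge_sd,
                            pumping, storativity, area, leak_coeff, h_boundary):
        """Monte Carlo of a lumped single-cell water balance (time step of 1):
        S*A*dh = recharge - pumping - leak_coeff*(h - h_boundary)."""
        rng = np.random.default_rng(8)
        h = np.full(n_draws, h0, float)
        for _ in range(n_steps):
            recharge = rng.normal(recharge_mean, recharge_sd, n_draws)
            dh = (recharge - pumping - leak_coeff * (h - h_boundary)) / (storativity * area)
            h = h + dh
        return h.mean(), h.var()

    mean_h, var_h = simulate_cell_heads(
        n_draws=20000, n_steps=120, h0=100.0,
        recharge_mean=2.0e5, recharge_sd=5.0e4,   # m^3 per step, uncertain
        pumping=1.5e5, leak_coeff=1.0e4,          # m^3 per step and per m of head difference
        storativity=0.1, area=5.0e6, h_boundary=95.0)
    print(round(mean_h, 2), round(var_h, 3))      # sample mean and variance of the head
    ```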

  9. Testing physical models for dipolar asymmetry with CMB polarization

    NASA Astrophysics Data System (ADS)

    Contreras, D.; Zibin, J. P.; Scott, D.; Banday, A. J.; Górski, K. M.

    2017-12-01

    The cosmic microwave background (CMB) temperature anisotropies exhibit a large-scale dipolar power asymmetry. To determine whether this is due to a real, physical modulation or is simply a large statistical fluctuation requires the measurement of new modes. Here we forecast how well CMB polarization data from Planck and future experiments will be able to confirm or constrain physical models for modulation. Fitting several such models to the Planck temperature data allows us to provide predictions for polarization asymmetry. While for some models and parameters Planck polarization will decrease error bars on the modulation amplitude by only a small percentage, we show, importantly, that cosmic-variance-limited (and in some cases even Planck) polarization data can decrease the errors by considerably better than the expectation of √2 based on simple ℓ-space arguments. We project that if the primordial fluctuations are truly modulated (with parameters as indicated by Planck temperature data) then Planck will be able to make a 2σ detection of the modulation model with 20%-75% probability, increasing to 45%-99% when cosmic-variance-limited polarization is considered. We stress that these results are quite model dependent. Cosmic variance in temperature is important: combining statistically isotropic polarization with temperature data will spuriously increase the significance of the temperature signal with 30% probability for Planck.

  10. Magnetic resonance image restoration via dictionary learning under spatially adaptive constraints.

    PubMed

    Wang, Shanshan; Xia, Yong; Dong, Pei; Feng, David Dagan; Luo, Jianhua; Huang, Qiu

    2013-01-01

    This paper proposes a spatially adaptive constrained dictionary learning (SAC-DL) algorithm for Rician noise removal in magnitude magnetic resonance (MR) images. This algorithm explores both the strength of dictionary learning to preserve image structures and the robustness of local variance estimation to remove signal-dependent Rician noise. The magnitude image is first separated into a number of partly overlapping image patches. The statistics of each patch are collected and analyzed to obtain a local noise variance. To better adapt to Rician noise, a correction factor is formulated with the local signal-to-noise ratio (SNR). Finally, the trained dictionary is used to denoise each image patch under spatially adaptive constraints. The proposed algorithm has been compared to the popular nonlocal means (NLM) filtering and unbiased NLM (UNLM) algorithm on simulated T1-weighted, T2-weighted and PD-weighted MR images. Our results suggest that the SAC-DL algorithm preserves more image structures than NLM while effectively removing the noise, and that it is also superior to UNLM at low noise levels.
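
    The abstract does not give the SAC-DL algorithm itself; the sketch below only illustrates the patch-based ingredient of estimating a local noise variance together with an SNR-dependent correction on a magnitude image. The patch size, the synthetic image, and the particular correction formula are assumptions chosen for illustration, not the paper's Rician correction.

```python
import numpy as np

def local_noise_map(img, patch=8, stride=4, eps=1e-6):
    """Estimate a local noise variance and an SNR-based correction per patch.

    The correction factor used here is a simple illustrative stand-in, not the
    SAC-DL paper's formula.
    """
    H, W = img.shape
    stats = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            p = img[i:i + patch, j:j + patch]
            mu, var = p.mean(), p.var()
            snr = mu / np.sqrt(var + eps)               # local signal-to-noise ratio
            corr = 1.0 / (1.0 + 1.0 / (snr ** 2 + eps)) # illustrative SNR-dependent factor
            stats.append((i, j, var * corr))            # spatially adaptive noise estimate
    return stats

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(50, 200, 64), (64, 1))      # synthetic "anatomy"
    noisy = np.abs(clean + rng.normal(0, 10, clean.shape))  # crude magnitude-image noise
    print(local_noise_map(noisy)[:3])
```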

  11. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
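
    The ASC is not specified in the abstract, but the classic Laplacian smoothness constraint it builds on amounts to a Tikhonov-type inversion that minimizes ||Gm - d||² + α²||Lm||² with L a second-difference operator. A minimal sketch on a toy problem follows; the toy forward matrix, noise level, and regularization parameter are assumptions.

```python
import numpy as np

def laplacian_1d(n):
    """Second-difference operator used as a 1-D smoothness (Laplacian) constraint."""
    L = np.zeros((n - 2, n))
    for k in range(n - 2):
        L[k, k:k + 3] = [1.0, -2.0, 1.0]
    return L

def smooth_regularized_inverse(G, d, alpha):
    """Minimize ||G m - d||^2 + alpha^2 ||L m||^2 (classic Laplacian smoothness)."""
    n = G.shape[1]
    L = laplacian_1d(n)
    A = G.T @ G + (alpha ** 2) * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Toy slip-inversion-like problem with synthetic data (all values illustrative).
rng = np.random.default_rng(0)
n_obs, n_param = 40, 25
G = rng.normal(size=(n_obs, n_param))
m_true = np.sin(np.linspace(0, np.pi, n_param))      # smooth "slip" model
d = G @ m_true + rng.normal(scale=0.05, size=n_obs)  # noisy observations
m_hat = smooth_regularized_inverse(G, d, alpha=1.0)
print("model misfit:", np.linalg.norm(m_hat - m_true))
```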

  12. Evaluating climate change impacts on streamflow variability based on a multisite multivariate GCM downscaling method in the Jing River of China

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Jin, Jiming

    2017-11-01

    Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly but its variance ratios of 2011-2040 to 1961-2005 increased by 1.25 ± 0.55. Streamflow variability was predicted to become greater over most months on the seasonal scale because of the increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.

  13. Multi-Sensor Constrained Time Varying Emissions Estimation of Black Carbon: Attributing Urban and Fire Sources Globally

    NASA Astrophysics Data System (ADS)

    Cohen, J. B.

    2015-12-01

    The short lifetime and heterogeneous distribution of Black Carbon (BC) in the atmosphere lead to complex impacts on radiative forcing, climate, and health, and complicate analysis of its atmospheric processing and emissions. Two recent papers have estimated the global and regional emissions of BC using advanced statistical and computational methods. One used a Kalman Filter, including data from AERONET, NOAA, and other ground-based sources, to estimate global emissions of 17.8+/-5.6 Tg BC/year (with the increase attributable to East Asia, South Asia, Southeast Asia, and Eastern Europe - all regions which have had rapid urban, industrial, and economic expansion). The second additionally used remotely sensed measurements from MISR and a variance maximizing technique, uniquely quantifying fire and urban sources in Southeast Asia, as well as their large year-to-year variability over the past 12 years, leading to increases from 10% to 150%. These new emissions products, when run through our state-of-the-art modelling system of chemistry, physics, transport, removal, radiation, and climate, match 140 ground stations and satellites better in both an absolute and a temporal sense. New work now further includes trace species measurements from OMI, which are used with the variance maximizing technique to constrain the types of emissions sources. Furthermore, land-use change and fire estimation products from MODIS are also included, which provide other constraints on the temporal and spatial nature of the variations of intermittent sources like fires or new permanent sources like expanded urbanization. This talk will introduce a new, top-down constrained, weekly varying BC emissions dataset, show that it produces a better fit with observations, and draw conclusions about the sources and impacts from urbanization on the one hand and fires on the other. Results specific to Southeast and East Asia will demonstrate inter- and intra-annual variations, such as the function of the wet and dry seasons. Further, the impacts of missing data due to cloud coverage and of long-range transport from highly polluted areas to relatively clean downwind areas will be demonstrated. More general results will also be discussed in relation to the global anthropogenic aerosol distribution.
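
    The assimilation system itself is not described in the abstract beyond the mention of a Kalman Filter; the sketch below shows only the generic scalar Kalman update applied to a hypothetical regional emission scale factor, with all error variances and pseudo-observations invented for illustration.

```python
import numpy as np

# Minimal scalar Kalman filter for a regional BC emission scale factor.
# All values are illustrative; this is not the paper's assimilation system.
rng = np.random.default_rng(2)
x_true = 1.4                      # "true" scale factor on a prior inventory
x_est, P = 1.0, 0.5 ** 2          # prior mean and variance
Q, R = 0.01 ** 2, 0.2 ** 2        # process and observation error variances

for _ in range(24):               # e.g. monthly station-derived constraints
    # Forecast step (random-walk model for the scale factor)
    P = P + Q
    # Observation: simulated station-based estimate of the scale factor
    y = x_true + rng.normal(0.0, np.sqrt(R))
    # Update step
    K = P / (P + R)               # Kalman gain
    x_est = x_est + K * (y - x_est)
    P = (1.0 - K) * P

print(f"estimated scale factor: {x_est:.2f} +/- {np.sqrt(P):.2f}")
```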

  14. Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells

    DTIC Science & Technology

    2017-11-01

    approach including row scaling and Ward’s minimum variance method was chosen. This analysis revealed two groups of four samples each. For the selected...articular cartilage in an ovine model. Am J Sports Med. 2008;36(5):841-50. 7. Deshpande BR, Katz JN, Solomon DH, Yelin EH, Hunter DJ, Messier SP, et al...Miosge1,* 1Tissue Regeneration Work Group , Department of Prosthodontics, Medical Faculty, Georg-August-University, 37075 Goettingen, Germany 2Institute of

  15. An Analysis Of The Benefits And Application Of Earned Value Management (EVM) Project Management Techniques For Dod Programs That Do Not Meet Dod Policy Thresholds

    DTIC Science & Technology

    2017-12-01

    carefully to ensure only minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM...baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of...The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can

  16. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.

  17. Waveform-based spaceborne GNSS-R wind speed observation: Demonstration and analysis using UK TechDemoSat-1 data

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang

    2018-03-01

    This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, the waveforms buried in noise or from ice/land are removed by screening on the peak-to-mean ratio and the cosine similarity of the waveform before wind speed is retrieved. The single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speed from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), the minimum variance (MV) estimator and the Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance do not significantly improve retrieval accuracy; however, the BP networks achieve an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.

  18. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Noteworthy attempts to solve this problem include beam-space adaptive beamforming methods and the fast MV method based on principal component analysis. These methods are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix, and the dimension of the covariance matrix is reduced by approximating the matrix with only its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the other methods when the dimensionality of the covariance matrices is reduced to the same dimension.
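
    The paper's Legendre-based beamformer is not reproduced here, but the general beamspace idea can be sketched: project the element-space covariance onto a few orthonormalized Legendre polynomials sampled across the aperture, invert the small reduced covariance, and apply the standard minimum variance (Capon) weight formula w = R⁻¹a / (aᴴR⁻¹a). The array size, diagonal loading, and steering vector below are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def mv_weights(R, a, loading=1e-3):
    """Minimum variance (Capon) weights w = R^-1 a / (a^H R^-1 a), with diagonal loading."""
    Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

def legendre_basis(n_elem, order):
    """Orthonormal basis from the first `order` Legendre polynomials sampled on the aperture."""
    x = np.linspace(-1.0, 1.0, n_elem)
    V = np.column_stack([legendre.Legendre.basis(k)(x) for k in range(order)])
    Q, _ = np.linalg.qr(V)            # orthonormalize the sampled polynomials
    return Q                          # shape (n_elem, order)

# Toy example: reduce a 32-element covariance to a 4-dimensional beamspace.
rng = np.random.default_rng(0)
n_elem, order = 32, 4
snap = rng.normal(size=(n_elem, 200)) + 1j * rng.normal(size=(n_elem, 200))
R = snap @ snap.conj().T / snap.shape[1]       # sample spatial covariance
a = np.ones(n_elem, dtype=complex)             # steering vector (broadside, illustrative)

B = legendre_basis(n_elem, order)
R_bs, a_bs = B.T @ R @ B, B.T @ a              # reduced covariance and steering vector
w_bs = mv_weights(R_bs, a_bs)                  # only a small matrix must be inverted
w = B @ w_bs                                   # back to element space
print("distortionless check |w^H a| =", abs(w.conj() @ a))
```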

  19. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV are usually not able to form the radiation beam precisely towards the target user and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented in MATLAB.
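
    For reference, the classical LCMV weight vector that the evolutionary algorithms refine is w = R⁻¹C (CᴴR⁻¹C)⁻¹f, where C stacks the constraint steering vectors and f the desired responses. The sketch below applies it to an assumed 10-element uniform linear array with a unit-gain constraint on the target and a null on one interferer; the PSO/DM-AIS/GSA refinements from the paper are not implemented.

```python
import numpy as np

def steering(n, spacing, theta_deg):
    """Uniform-linear-array steering vector (element spacing in wavelengths)."""
    k = 2 * np.pi * spacing * np.arange(n) * np.sin(np.radians(theta_deg))
    return np.exp(1j * k)

def lcmv_weights(R, C, f):
    """Classic LCMV solution: w = R^-1 C (C^H R^-1 C)^-1 f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# Illustrative 10-element array, desired signal at 0 deg, interferer at 30 deg.
rng = np.random.default_rng(0)
n = 10
a_des, a_int = steering(n, 0.5, 0.0), steering(n, 0.5, 30.0)
snap = (a_des[:, None] * rng.normal(size=200)
        + 5.0 * a_int[:, None] * rng.normal(size=200)
        + 0.1 * (rng.normal(size=(n, 200)) + 1j * rng.normal(size=(n, 200))))
R = snap @ snap.conj().T / snap.shape[1]       # sample spatial covariance

C = np.column_stack([a_des, a_int])            # constraint matrix
f = np.array([1.0, 0.0], dtype=complex)        # unit gain on target, null on interferer
w = lcmv_weights(R, C, f)
print("gain toward target    :", abs(w.conj() @ a_des))
print("gain toward interferer:", abs(w.conj() @ a_int))
```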

  20. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV are usually not able to form the radiation beam precisely towards the target user and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented in MATLAB. PMID:25147859

  1. Demographics of an ornate box turtle population experiencing minimal human-induced disturbances

    USGS Publications Warehouse

    Converse, S.J.; Iverson, J.B.; Savidge, J.A.

    2005-01-01

    Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.

  2. Refining Southern California Geotherms Using Seismologic, Geologic, and Petrologic Constraints

    NASA Astrophysics Data System (ADS)

    Thatcher, W. R.; Chapman, D. S.; Allam, A. A.; Williams, C. F.

    2017-12-01

    Lithospheric deformation in tectonically active regions depends on the 3D distribution of rheology, which is in turn critically controlled by temperature. Under the auspices of the Southern California Earthquake Center (SCEC) we are developing a 3D Community Thermal Model (CTM) to constrain rheology and so better understand deformation processes within this complex but densely monitored and relatively well-understood region. The San Andreas transform system has sliced southern California into distinct blocks, each with characteristic lithologies, seismic velocities and thermal structures. Guided by the geometry of these blocks we use more than 250 surface heat-flow measurements to define 13 geographically distinct heat flow regions (HFRs). Model geotherms within each HFR are constrained by averages and variances of surface heat flow q0 and the 1D depth distribution of thermal conductivity (k) and radiogenic heat production (A), which are strongly dependent on rock type. Crustal lithologies are not always well known and we turn to seismic imaging for help. We interrogate the SCEC Community Velocity Model (CVM) to determine averages and variances of Vp, Vs and Vp/Vs versus depth within each HFR. We bound (A, k) versus depth by relying on empirical relations between seismic wave speed and rock type and laboratory and modeling methods relating (A, k) to rock type. Many 1D conductive geotherms for each HFR are allowed by the variances in surface heat flow and subsurface (A, k). An additional constraint on the lithosphere temperature field is provided by comparing lithosphere-asthenosphere boundary (LAB) depths identified seismologically with those defined thermally as the depth of onset of partial melting. Receiver function studies in Southern California indicate LAB depths that range from 40 km to 90 km. Shallow LAB depths are correlated with high surface heat flow and deep LAB with low heat flow. The much-restricted families of geotherms that intersect peridotite solidi at the seismological LAB depth in each region require that LAB temperatures lie between 1050 to 1250˚ C, a range that is consistent with a hydrous rather than anhydrous mantle below Southern California.

  3. Natural selection and inheritance of breeding time and clutch size in the collared flycatcher.

    PubMed

    Sheldon, B C; Kruuk, L E B; Merilä, J

    2003-02-01

    Many characteristics of organisms in free-living populations appear to be under directional selection, possess additive genetic variance, and yet show no evolutionary response to selection. Avian breeding time and clutch size are often-cited examples of such characters. We report analyses of inheritance of, and selection on, these traits in a long-term study of a wild population of the collared flycatcher Ficedula albicollis. We used mixed model analysis with REML estimation ("animal models") to make full use of the information in complex multigenerational pedigrees. Heritability of laying date, but not clutch size, was lower than that estimated previously using parent-offspring regressions, although for both traits there was evidence of substantial additive genetic variance (h2 = 0.19 and 0.29, respectively). Laying date and clutch size were negatively genetically correlated (rA = -0.41 +/- 0.09), implying that selection on one of the traits would cause a correlated response in the other, but there was little evidence to suggest that evolution of either trait would be constrained by correlations with other phenotypic characters. Analysis of selection on these traits in females revealed consistent strong directional fecundity selection for earlier breeding at the level of the phenotype (beta = -0.28 +/- 0.03), but little evidence for stabilising selection on breeding time. We found no evidence that clutch size was independently under selection. Analysis of fecundity selection on breeding values for laying date, estimated from an animal model, indicated that selection acts directly on additive genetic variance underlying breeding time (beta = -0.20 +/- 0.04), but not on clutch size (beta = 0.03 +/- 0.05). In contrast, selection on laying date via adult female survival fluctuated in sign between years, and was opposite in sign for selection on phenotypes (negative) and breeding values (positive). Our data thus suggest that any evolutionary response to selection on laying date is partially constrained by underlying life-history trade-offs, and illustrate the difficulties in using purely phenotypic measures and incomplete fitness estimates to assess evolution of life-history trade-offs. We discuss some of the difficulties associated with understanding the evolution of laying date and clutch size in natural populations.

  4. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

    Contrastive to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for the discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with the pre-specified trajectories of the output and the control input and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of the given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for the repetitive DT systems.

  5. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
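
    The exterior penalty idea referred to above can be sketched on a toy constrained minimization (not the stiffened-cylinder design): the constrained problem is replaced by an unconstrained one whose objective adds a penalty on constraint violation that is zero inside the feasible region and is progressively stiffened. The toy objective, constraint, and penalty schedule below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem (not the stiffened-cylinder design): minimize
# f(x) = (x0 - 2)^2 + (x1 - 1)^2  subject to  g(x) = x0 + x1 - 2 <= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] + x[1] - 2.0

def exterior_penalty(x, r):
    """Objective plus quadratic penalty on constraint violation (zero when feasible)."""
    return f(x) + r * max(0.0, g(x)) ** 2

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:           # progressively stiffen the penalty
    x = minimize(exterior_penalty, x, args=(r,), method="Nelder-Mead").x
print("approximate constrained minimum:", x)    # exact answer is (1.5, 0.5)
```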

  6. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum weight design using simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In the semi-analytical approach used here, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress is always less than the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weight obtained by the two methods is almost identical. The PSO method gives a profile with slightly less weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility compared with classical methods.

  7. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems that have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  8. Optimization of Stability Constrained Geometrically Nonlinear Shallow Trusses Using an Arc Length Sparse Method with a Strain Energy Density Approach

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.; Nguyen, Duc T.

    2008-01-01

    A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.

  9. Computational strategies in the dynamic simulation of constrained flexible MBS

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.

    1993-01-01

    This research focuses on the computational dynamics of flexible constrained multibody systems. At first a recursive mapping formulation of the kinematical expressions in a minimum dimension as well as the matrix representation of the equations of motion are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.

  10. Constraining Earth's Rheology of the Barents Sea Using Grace Gravity Change Observations

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Root, B. C.; Tarasov, L.

    2014-12-01

    The Barents Sea region was ice covered during the Last Glacial Maximum and experiences Glacial Isostatic Adjustment (GIA). Because of the limited amount of relevant geological and geodetic observations, it is difficult to constrain GIA models for this region. With improved ice sheet models and gravity observations from GRACE, it is possible to better constrain Earth rheology. This study aims to constrain the upper mantle viscosity and elastic lithosphere thickness from GRACE data in the Barents Sea region. The GRACE observations are corrected for current ice melting on Svalbard, Novaya Zemlya and Franz Josef Land. A secular gravity-rate trend is estimated from the CSR release 5 GRACE data for the period of February 2003 to July 2013. Furthermore, long wavelength effects from distant large mass balance signals such as Greenland ice melting are filtered out. A new high-variance set of ice loading histories from calibrated glaciological modeling is used in the GIA modeling, as it is found that ICE-5G over-estimates the observed GIA gravity change in the region. It is found that the rheology structure represented by VM5a results in over-estimation of the observed gravity change in the region for all ice sheet chronologies investigated. Therefore, other rheological Earth models were investigated. The best fitting upper mantle viscosity and elastic lithosphere thickness in the Barents Sea region are 4 (±0.5) × 10^20 Pa s and 110 (±20) km, respectively.

  11. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the field of 3d-GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. Firstly, features are extracted from the point cloud under the rules of curvature extremes and minimum spanning tree. By projecting local sample points to the fitted tangent planes and using extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Above all, an appropriate k-neighborhood can help to recognize insufficiently sampled areas and boundary parts, and the presented approach can be used to reconstruct both open and closed surfaces without additional interference.

  12. Natural migration rates of trees: Global terrestrial carbon cycle implications. Book chapter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, A.M.

    The paper discusses the forest-ecological processes which constrain the rate of response by forests to rapid future environmental change. It establishes a minimum response time by natural tree populations which invade alien landscapes and reach the status of a mature, closed canopy forest when maximum carbon storage is realized. It considers rare long-distance and frequent short-distance seed transport, seedling and tree establishment, sequential tree and stand maturation, and spread between newly established colonies.

  13. Thermochronology, Uplift and Erosion at the Australian-Pacific Plate Boundary Alpine Fault restraining bend, New Zealand

    NASA Astrophysics Data System (ADS)

    Sagar, M. W.; Seward, D.; Norton, K. P.

    2016-12-01

    The 650 km-long Australian-Pacific plate boundary Alpine Fault is remarkably straight at a regional scale, except for a prominent S-shaped bend in the northern South Island. This is a restraining bend and has been referred to as the `Big Bend' due to similarities with the Transverse Ranges section of the San Andreas Fault. The Alpine Fault is the main source of seismic hazard in the South Island, yet there are no constraints on slip rates at the Big Bend. Furthermore, the timing of Big Bend development is poorly constrained to the Miocene. To address these issues we are using the fission-track (FT) and 40Ar/39Ar thermochronometers, together with basin-averaged cosmogenic nuclide 10Be concentrations to constrain the onset and rate of Neogene-Quaternary exhumation of the Australian and Pacific plates at the Big Bend. Exhumation rates at the Big Bend are expected to be greater than those for adjoining sections of the Alpine Fault due to locally enhanced shortening. Apatite FT ages and modelled thermal histories indicate that exhumation of the Australian Plate had begun by 13 Ma and 3 km of exhumation has occurred since that time, requiring a minimum exhumation rate of 0.2 mm/year. In contrast, on the Pacific Plate, zircon FT cooling ages suggest ≥7 km of exhumation in the past 2-3 Ma, corresponding to a minimum exhumation rate of 2 mm/year. Preliminary assessment of stream channel gradients either side of the Big Bend suggests equilibrium between uplift and erosion. The implication of this is that Quaternary erosion rates estimated from 10Be concentrations will approximate uplift rates. These uplift rates will help to better constrain the dip-slip rate of the Alpine Fault, which will allow the National Seismic Hazard Model to be updated.

  14. Tabu search algorithm for the distance-constrained vehicle routing problem with split deliveries by order.

    PubMed

    Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua

    2018-01-01

    The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that the customer demand, which cannot be split in the classical VRP model, can only be split into discrete deliveries by order. A model of double objective programming is constructed by taking the minimum number of vehicles used and minimum vehicle traveling cost as the first and the second objective, respectively. This approach contains a series of constraints, such as single depot, single vehicle type, distance-constrained and load capacity limit, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and test examples show the efficiency of the proposed algorithm. This paper focuses on constructing a double objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving the problem. The performance of the ATSA is improved by adding some strategies into the search process, including: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, whereby the best solution must be feasible while the current solution may be infeasible, helps to balance the quality of the solution and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; (f) a tabu-releasing strategy is used to move the current solution into a new neighborhood of a better solution.

  15. Tabu search algorithm for the distance-constrained vehicle routing problem with split deliveries by order

    PubMed Central

    Xia, Yangkun; Pan, Lijun; Duan, Fenghua

    2018-01-01

    The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that the customer demand, which cannot be split in the classical VRP model, can only be split into discrete deliveries by order. A model of double objective programming is constructed by taking the minimum number of vehicles used and minimum vehicle traveling cost as the first and the second objective, respectively. This approach contains a series of constraints, such as single depot, single vehicle type, distance-constrained and load capacity limit, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and test examples show the efficiency of the proposed algorithm. This paper focuses on constructing a double objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving the problem. The performance of the ATSA is improved by adding some strategies into the search process, including: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, whereby the best solution must be feasible while the current solution may be infeasible, helps to balance the quality of the solution and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; (f) a tabu-releasing strategy is used to move the current solution into a new neighborhood of a better solution. PMID:29763419

  16. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.

  17. Modular control of varied locomotor tasks in children with incomplete spinal cord injuries

    PubMed Central

    Tester, Nicole J.; Kautz, Steven A.; Howland, Dena R.; Clark, David J.; Garvan, Cyndi; Behrman, Andrea L.

    2013-01-01

    A module is a functional unit of the nervous system that specifies functionally relevant patterns of muscle activation. In adults, four to five modules account for muscle activation during walking. Neurological injury alters modular control and is associated with walking impairments. The effect of neurological injury on modular control in children is unknown and may differ from adults due to their immature and developing nervous systems. We examined modular control of locomotor tasks in children with incomplete spinal cord injuries (ISCIs) and control children. Five controls (8.6 ± 2.7 yr of age) and five children with ISCIs (8.6 ± 3.7 yr of age) performed treadmill walking, overground walking, pedaling, supine lower extremity flexion/extension, stair climbing, and crawling. Electromyograms (EMGs) were recorded in bilateral leg muscles. Nonnegative matrix factorization was applied, and the minimum number of modules required to achieve 90% of the “variance accounted for” (VAF) was calculated. On average, 3.5 modules explained muscle activation in the controls, whereas 2.4 modules were required in the children with ISCIs. To determine if control is similar across tasks, the module weightings identified from treadmill walking were used to reconstruct the EMGs from each of the other tasks. This resulted in VAF values exceeding 86% for each child and each locomotor task. Our results suggest that 1) modularity is constrained in children with ISCIs and 2) for each child, similar neural control mechanisms are used across locomotor tasks. These findings suggest that interventions that activate the neuromuscular system to enhance walking also may influence the control of other locomotor tasks. PMID:23761702

  18. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
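
    Step 2 of the loop described above fits each patch as a sparse linear combination of dictionary atoms. A minimal greedy pursuit (orthogonal matching pursuit) over a fixed random dictionary is sketched below as a stand-in for that step; the dictionary learning itself, the patch size, and the sparsity level are assumptions.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: y ~ D @ x with at most n_nonzero atoms."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))      # best-correlated atom
        if k not in support:
            support.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return x

# Toy patch model: a 64-pixel (8x8) slowness patch coded over a random dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -0.7, 0.4]        # sparse "true" code
patch = D @ x_true + 0.01 * rng.normal(size=64)
x_hat = omp(D, patch, n_nonzero=3)
print("recovered support:", np.nonzero(x_hat)[0])
```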

  19. Future mission studies: Preliminary comparisons of solar flux models

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    The results of comparisons of the solar flux models are presented. (The wavelength λ = 10.7 cm radio flux is the best indicator of the strength of the ionizing radiations such as solar ultraviolet and x-ray emissions that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Thus, accurate forecasting of the solar flux F10.7 is crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimation that properly combines all three models into one that minimizes the variance is also discussed. All the physics inherent in each model are combined. This is considered to be the dead-end statistical approach to solar flux forecasting before any nonlinear chaotic approach.

  20. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of the correlation matrix, composed of the detrended cross-correlation coefficients (DCCA coefficients), is proposed to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and make it more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
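
    Whatever correlation estimate is plugged in (Pearson or DCCA-based), the global minimum variance portfolio weights follow from w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses an invented three-asset correlation matrix and volatilities in place of the DCCA coefficients, which are not computed here.

```python
import numpy as np

def min_variance_weights(corr, vols):
    """Global minimum-variance weights w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    sigma = np.outer(vols, vols) * corr          # covariance from correlations and volatilities
    ones = np.ones(len(vols))
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

# Illustrative inputs; in the paper the correlation entries would be DCCA coefficients.
corr = np.array([[1.00, 0.30, 0.10],
                 [0.30, 1.00, 0.25],
                 [0.10, 0.25, 1.00]])
vols = np.array([0.20, 0.15, 0.25])              # annualized volatilities (assumed)
w = min_variance_weights(corr, vols)
print("minimum-variance weights:", np.round(w, 3))
```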

  1. Demodulation of messages received with low signal to noise ratio

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Quignon, T.; Romann, B.

    The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional adapted filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings as compared to conventional realizations. Nominal operation has been verified down to an energy signal-to-noise ratio of -3 dB on a QPSK demodulator.

  2. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum-variance adaptive filter is utilized. The trajectory of the probe and the probe parameters are in a consider mode, where their estimates are unimproved but their associated uncertainties are permitted to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.

  3. Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system

    NASA Astrophysics Data System (ADS)

    Bai, Jianbo; Li, Yang; Chen, Jianhao

    2018-02-01

    The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on the minimum variance evaluation, the adaptive control method was used to realize better control of the water chiller unit. To verify the performance of the adaptive control method, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance to that of the conventional PID controller.

  4. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
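
    The inverse-variance weighted mean used in the photopeak fitting model combines repeated estimates yᵢ with uncertainties σᵢ as ȳ = Σ(yᵢ/σᵢ²)/Σ(1/σᵢ²), with standard error 1/√Σ(1/σᵢ²). A minimal sketch with invented peak-area values follows.

```python
import numpy as np

def inverse_variance_mean(y, sigma):
    """Inverse-variance weighted mean and its standard error."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mean = np.sum(w * y) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Illustrative repeated aluminum peak-area estimates (arbitrary units).
y = np.array([10.2, 9.5, 11.1, 10.7])
sigma = np.array([0.8, 1.2, 0.9, 1.5])
mean, se = inverse_variance_mean(y, sigma)
print(f"combined estimate: {mean:.2f} +/- {se:.2f}")
```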

  5. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  6. Lekking without a paradox in the buff-breasted sandpiper

    USGS Publications Warehouse

    Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.

    1997-01-01

    Females in lek‐breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff‐breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single‐locus minisatellite DNA probes to provide the first evidence from a lek‐breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff‐breasted sandpipers. The behavior of other lek‐breeding birds is sufficiently similar to that of buff‐breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.

  7. Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming

    USGS Publications Warehouse

    Karlinger, M.R.; Skrivan, James A.

    1981-01-01

    Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
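
    Ordinary kriging, as summarized above, solves a linear system built from the semivariogram, with a Lagrange multiplier enforcing the unbiasedness condition that the weights sum to one, and returns both the estimate and its kriging variance. The exponential semivariogram model, its parameters, and the synthetic stations below are assumptions for illustration.

```python
import numpy as np

def gamma_exp(h, sill=900.0, rng_param=50.0, nugget=0.0):
    """Exponential semivariogram (illustrative model and parameters)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_param))

def ordinary_krige(xy, z, x0):
    """Ordinary kriging estimate and kriging variance at point x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_exp(d)
    A[n, n] = 0.0                                 # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma_exp(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ b[:n] + mu                  # estimate, kriging variance

# Synthetic "precipitation stations" (coordinates in km, values in mm/yr).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(12, 2))
z = 300 + rng.normal(0, 30, size=12)
est, kvar = ordinary_krige(xy, z, np.array([50.0, 50.0]))
print(f"kriged estimate: {est:.1f} mm/yr, kriging variance: {kvar:.1f}")
```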

  8. Aircrew coordination and decisionmaking: Peer ratings of video tapes made during a full mission simulation

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.; Awe, C. A.

    1986-01-01

    Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used to rate variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variance of decision efficiency, crew coordination, and command reversal was in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component, varimax factor analysis supported the model structure suggested by the regression analyses.

  9. Signal-dependent noise determines motor planning

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.; Wolpert, Daniel M.

    1998-08-01

    When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical `two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
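
    A simplified discrete-time statement of the planning problem sketched above may help fix ideas: with linear plant dynamics and control-dependent ("signal-dependent") noise, the command sequence is chosen so that the expected trajectory reaches the target while the variance of the final position is minimized. The notation below is an illustrative assumption, not the paper's exact formulation (which also averages the variance over a post-movement period).

```latex
x_{t+1} = A\,x_t + B\,(u_t + w_t), \qquad w_t \sim \mathcal{N}\bigl(0,\; k\,u_t^{2}\bigr),
\qquad
\min_{u_0,\dots,u_{T-1}} \operatorname{Var}[x_T]
\quad \text{subject to} \quad \mathbb{E}[x_T] = x_{\mathrm{target}} .
```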

  10. A data assimilating model for estimating Southern Ocean biogeochemistry

    NASA Astrophysics Data System (ADS)

    Verdy, A.; Mazloff, M. R.

    2017-09-01

    A Biogeochemical Southern Ocean State Estimate (B-SOSE) is introduced that includes carbon and oxygen fields as well as nutrient cycles. The state estimate is constrained with observations while maintaining closed budgets and obeying dynamical and thermodynamic balances. Observations from profiling floats, shipboard data, underway measurements, and satellites are used for assimilation. The years 2008-2012 are chosen due to the relative abundance of oxygen observations from Argo floats during this time. The skill of the state estimate at fitting the data is assessed. The agreement is best for fields that are constrained with the most observations, such as surface pCO2 in Drake Passage (44% of the variance captured) and oxygen profiles (over 60% of the variance captured at 200 and 1000 m). The validity of adjoint method optimization for coupled physical-biogeochemical state estimation is demonstrated with a series of gradient check experiments. The method is shown to be mature and ready to synthesize in situ biogeochemical observations as they become more available. Documenting the B-SOSE configuration and diagnosing the strengths and weaknesses of the solution informs usage of this product as both a climate baseline and as a way to test hypotheses. Transport of Intermediate Waters across 32°S supplies significant amounts of nitrate to the Atlantic Ocean (5.57 ± 2.94 Tmol yr-1) and Indian Ocean (5.09 ± 3.06 Tmol yr-1), but much less nitrate reaches the Pacific Ocean (1.78 ± 1.91 Tmol yr-1). Estimates of air-sea carbon dioxide fluxes south of 50°S suggest a mean uptake of 0.18 Pg C/yr for the time period analyzed.

  11. Application of the LI-COR CO2 analyzer to volcanic plumes: a case study, volcán Popocatépetl, Mexico, June 7 and 10, 1995

    USGS Publications Warehouse

    Gerlach, T.M.; Delgado, H.; McGee, K.A.; Doukas, M.P.; Venegas, J.J.; Cardenas, L.

    1997-01-01

    Volcanic CO2 emission rate data are sparse despite their potential importance for constraining the role of magma degassing in the biogeochemical cycle of carbon and for assessing volcanic hazards. We used a LI-COR CO2 analyzer to determine volcanic CO2 emission rates by airborne measurements in volcanic plumes at Popocatépetl volcano on June 7 and 10, 1995. LI-COR sample paths of ∼72 m, compared with ∼1 km for the analyzer customarily used, together with fast Fourier transforms to remove instrument noise from raw data greatly improve resolution of volcanic CO2 anomalies. Parametric models fit to background CO2 provide a statistical tool for distinguishing volcanic from ambient CO2. Global Positioning System referenced flight traverses provide vastly improved data on the shape, coherence, and spatial distribution of volcanic CO2 in plume cross sections and contrast markedly with previous results based on traverse stacking. The continuous escape of CO2 and SO2 from Popocatépetl was fundamentally noneruptive and represented quiescent magma degassing from the top of a magma chamber ∼5 km deep. The average CO2 emission rate for January-June 1995 is estimated to be at least 6400 t d−1, one of the highest determined for a quiescently degassing volcano, although correction for downwind dispersion effects on volcanic CO2 indicates a higher rate of ∼9000 t d−1. Analysis of random errors indicates emission rates have 95% confidence intervals of ∼±20%, with uncertainty contributed mostly by wind speed variance, although the variance of plume cross-sectional areas during traversing is poorly constrained and possibly significant.

  12. Holocene constraints on simulated tropical Pacific climate

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K. M.; Carre, M.; Braconnot, P.; Leloup, J.; Zhou, Y.; Harrison, S. P.; Correge, T.; Mcgregor, H. V.; Collins, M.; Driscoll, R.; Elliot, M.; Schneider, B.; Tudhope, A. W.

    2015-12-01

    The El Niño-Southern Oscillation (ENSO) influences climate and weather worldwide, so uncertainties in its response to external forcings contribute to the spread in global climate projections. Theoretical and modeling studies have argued that such forcings may affect ENSO either via the seasonal cycle, the mean state, or extratropical influences, but these mechanisms are poorly constrained by the short instrumental record. Here we synthesize a pan-Pacific network of high-resolution marine biocarbonates spanning discrete snapshots of the Holocene (past 10,000 years of Earth's history), which we use to constrain a set of global climate model (GCM) simulations via a forward model and a consistent treatment of uncertainty. Observations suggest important reductions in ENSO variability throughout the interval, most consistently during 3-5 kyBP, when approximately 2/3 reductions are inferred. The magnitude and timing of these ENSO variance reductions bear little resemblance to those simulated by GCMs, or to equatorial insolation. The central Pacific witnessed a mid-Holocene increase in seasonality, at odds with the reductions simulated by GCMs. Finally, while GCM aggregate behavior shows a clear inverse relationship between seasonal amplitude and ENSO-band variance in sea-surface temperature, in agreement with many previous studies, such a relationship is not borne out by these observations. Our synthesis suggests that tropical Pacific climate is highly variable, but exhibited millennia-long periods of reduced ENSO variability whose origins, whether forced or unforced, contradict existing explanations. It also points to deficiencies in the ability of current GCMs to simulate forced changes in the tropical Pacific seasonal cycle and its interaction with ENSO, highlighting a key area of growth for future modeling efforts.

  13. Foraminifera Record the Good Years More than the Bad

    NASA Astrophysics Data System (ADS)

    Hull, P. M.

    2014-12-01

    Past ocean conditions are primarily discerned from geochemical and community-based analyses of fossilized taxa, each of which have unique environmental niches and dynamics. A key requirement of such paleoceanographic studies is that some unbiased or well-constrained record of the living ecosystem and climate is deposited on the sea floor and preserved through the post-depositional processes that act to distort it. It is widely known that foraminiferal species exhibit varying seasonal preferences and that seasonality is a key variable to account for in paleoceanographic reconstructions. However, on longer time scales (> year), it is generally assumed that species record the 'average' environmental conditions or typical variance (e.g., El Nino intensity) that existed in a given, time-averaged sediment sample. Here I examine planktonic foraminiferal population dynamics on yearly and longer time scales, in order to quantify their effect on paleoceanographic reconstructions. Using a previously published record of >250 years of population dynamics in the Santa Barbara Basin sediments, I find that the majority of individuals in a given species lived during a small subset of the total years (~15-37% of years, depending on the species). Populations of shallow, mixed layer species primarily represent the warmest, youngest years, while thermocline species primarily represent the cooler, older years. Importantly, the seasonality of species does not always predict their interannual dynamics. The general importance of long time-scale population dynamics on paleoceanographic reconstructions will also be considered in a theoretical model parameterized with temporally explicit species co-variances and temperature variability. Such modeling is needed to constrain the relative impact that a very good year can have on our interpretation of the 'average' of hundreds to thousands of years.

  14. Energy Requirements of Hydrogen-utilizing Microbes: A Boundary Condition for Subsurface Life

    NASA Technical Reports Server (NTRS)

    Hoehler, Tori M.; Alperin, Marc J.; Albert, Daniel B.; Martens, Christopher S.

    2003-01-01

    Microbial ecosystems based on the energy supplied by water-rock chemistry carry particular significance in the context of geo- and astrobiology. With no direct dependence on solar energy, lithotrophic microbes could conceivably penetrate a planetary crust to a depth limited only by temperature or pressure constraints (several kilometers or more). The deep lithospheric habitat is thereby potentially much greater in volume than its surface counterpart, and in addition offers a stable refuge against inhospitable surface conditions related to climatic or atmospheric evolution (e.g., Mars) or even high-energy impacts (e.g., early in Earth's history). The possibilities for a deep microbial biosphere are, however, greatly constrained by life's need to obtain energy at a certain minimum rate (the maintenance energy requirement) and of a certain minimum magnitude (the energy quantum requirement). The mere existence of these requirements implies that a significant fraction of the chemical free energy available in the subsurface environment cannot be exploited by life. Similar limits may also apply to the usefulness of light energy at very low intensities or long wavelengths. Quantification of these minimum energy requirements in terrestrial microbial ecosystems will help to establish a criterion of energetic habitability that can significantly constrain the prospects for life in Earth's subsurface, or on other bodies in the solar system. Our early work has focused on quantifying the biological energy quantum requirement for methanogenic archaea, as representatives of a plausible subsurface metabolism, in anoxic sediments (where energy availability is among the most limiting factors in microbial population growth). In both field and laboratory experiments utilizing these sediments, methanogens retain a remarkably consistent free energy intake, in the face of fluctuating environmental conditions that affect energy availability. The energy yields apparently required by methanogens in these sediment systems for sustained metabolism are about half that previously thought necessary. Lowered energy requirements would imply that a correspondingly greater proportion of the planetary subsurface could represent viable habitat for microorganisms.

  15. In the right place at the right time: habitat representation in protected areas of South American Nothofagus-dominated plants after a dispersal constrained climate change scenario.

    PubMed

    Alarcón, Diego; Cavieres, Lohengrin A

    2015-01-01

    In order to assess the effects of climate change on temperate rainforest plants in southern South America in terms of habitat size and representation in protected areas, and to determine whether the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration for 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase the species representation level to 66%. Moreover, 97% of the evaluated species can achieve the minimum representation target only if the proposed priority areas are included. Under the climate change scenario, representation levels increase slightly to 53%, 69%, and 99%, respectively, for the categories previously mentioned. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact on the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to achieve the present and future minimum representation targets. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions.

  16. In the Right Place at the Right Time: Habitat Representation in Protected Areas of South American Nothofagus-Dominated Plants after a Dispersal Constrained Climate Change Scenario

    PubMed Central

    Alarcón, Diego; Cavieres, Lohengrin A.

    2015-01-01

    In order to assess the effects of climate change on temperate rainforest plants in southern South America in terms of habitat size and representation in protected areas, and to determine whether the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration for 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase the species representation level to 66%. Moreover, 97% of the evaluated species can achieve the minimum representation target only if the proposed priority areas are included. Under the climate change scenario, representation levels increase slightly to 53%, 69%, and 99%, respectively, for the categories previously mentioned. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact on the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to achieve the present and future minimum representation targets. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions. PMID:25786226

  17. Call Admission Control on Single Node Networks under Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) Scheduler

    NASA Astrophysics Data System (ADS)

    Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi

    Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky bucket constrained sessions. However, the delay bounds for leaky bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy like GPS and controls the service rate in order to lower the delay bounds for leaky bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky bucket constrained sessions with deterministic delay requirements. This CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with one for GPS in terms of schedulable region and computational complexity.
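
    For context on the delay bounds discussed above, the classical single-node GPS result of Parekh and Gallager is worth recalling: if session i is constrained by a leaky bucket with burst size sigma_i and token rate rho_i, and GPS guarantees it a service rate g_i >= rho_i through its weight phi_i, then its delay is bounded as below (C is the link capacity). This is background to the abstract rather than the ORC-GPS bound itself.

```latex
D_i \;\le\; \frac{\sigma_i}{g_i},
\qquad
g_i \;=\; \frac{\phi_i}{\sum_j \phi_j}\,C \;\ge\; \rho_i .
```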

  18. Dynamical behaviors of structural, constrained and free water in calcium- and magnesium-silicate-hydrate gels

    DOE PAGES

    Le, Peisi; Fratini, Emiliano; Ito, Kanae; ...

    2016-01-28

    We present the hypothesis that the mechanical properties of cement pastes depend strongly on their porosities. In a saturated paste, the porosity links to the free water volume after hydration. Structural water, constrained water, and free water have different dynamical behavior. Hence, it should be possible to extract information on the pore system by exploiting the water dynamics. With our experiments we investigated the slow dynamics of hydration water confined in calcium- and magnesium-silicate-hydrate (C-S-H and M-S-H) gels using the high-resolution quasi-elastic neutron scattering (QENS) technique. C-S-H and M-S-H are the chemical binders present in calcium-rich and magnesium-rich cements. We measured three M-S-H samples: pure M-S-H, M-S-H with aluminum-silicate nanotubes (ASN), and M-S-H with carboxyl group functionalized ASN (ASN-COOH). A C-S-H sample with the same water content (i.e. 0.3) is also studied for comparison. We found that structural water in the gels contributes to the elastic component of the QENS spectrum, while constrained water and free water contribute to the quasi-elastic component. The quantitative analysis suggests that the three components vary for different samples and indicate the variance in the system porosity, which controls the mechanical properties of cement pastes.

  19. Dynamical behaviors of structural, constrained and free water in calcium- and magnesium-silicate-hydrate gels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Peisi; Fratini, Emiliano; Ito, Kanae

    We present the hypothesis that the mechanical properties of cement pastes depend strongly on their porosities. In a saturated paste, the porosity links to the free water volume after hydration. Structural water, constrained water, and free water have different dynamical behavior. Hence, it should be possible to extract information on the pore system by exploiting the water dynamics. With our experiments we investigated the slow dynamics of hydration water confined in calcium- and magnesium-silicate-hydrate (C-S-H and M-S-H) gels using the high-resolution quasi-elastic neutron scattering (QENS) technique. C-S-H and M-S-H are the chemical binders present in calcium-rich and magnesium-rich cements. We measured three M-S-H samples: pure M-S-H, M-S-H with aluminum-silicate nanotubes (ASN), and M-S-H with carboxyl group functionalized ASN (ASN-COOH). A C-S-H sample with the same water content (i.e. 0.3) is also studied for comparison. We found that structural water in the gels contributes to the elastic component of the QENS spectrum, while constrained water and free water contribute to the quasi-elastic component. The quantitative analysis suggests that the three components vary for different samples and indicate the variance in the system porosity, which controls the mechanical properties of cement pastes.

  20. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The cost function used, which is potentially unimodal, relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of the channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.

  1. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
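
    The parameter-separating surrogate mentioned above comes from the weighted arithmetic-geometric mean inequality. One common form (with notation assumed here for illustration): for a posynomial term with positive exponents a_j, current iterate x^{(n)}, and s the sum of the exponents, the term is majorized by a sum of one-dimensional functions of the x_j,

```latex
\prod_j x_j^{a_j}
\;\le\;
\Bigl(\prod_j \bigl(x_j^{(n)}\bigr)^{a_j}\Bigr)
\sum_j \frac{a_j}{s}\Bigl(\frac{x_j}{x_j^{(n)}}\Bigr)^{s},
\qquad s = \sum_j a_j ,
```

    with equality at x = x^{(n)}, so minimizing the separated right-hand side coordinate by coordinate cannot increase the original objective (the MM descent property).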

  2. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
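
    In schematic form (notation assumed here rather than quoted from the paper), the MRE method selects the density q(m) closest in relative entropy to a prior p(m) that encodes the bounds and prior expected value, subject to reproducing the data, and reports the posterior mean as the solution:

```latex
\hat{q} \;=\; \arg\min_{q}\; \int q(m)\,\ln\frac{q(m)}{p(m)}\,dm
\quad\text{s.t.}\quad G\,\mathbb{E}_q[m] = d, \qquad \int q(m)\,dm = 1,
\qquad
\hat{m} \;=\; \mathbb{E}_{\hat{q}}[m].
```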

  3. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  4. FEMALE AND MALE GENETIC EFFECTS ON OFFSPRING PATERNITY: ADDITIVE GENETIC (CO)VARIANCES IN FEMALE EXTRA-PAIR REPRODUCTION AND MALE PATERNITY SUCCESS IN SONG SPARROWS (MELOSPIZA MELODIA)

    PubMed Central

    Reid, Jane M; Arcese, Peter; Keller, Lukas F; Losdat, Sylvain

    2014-01-01

    Ongoing evolution of polyandry, and consequent extra-pair reproduction in socially monogamous systems, is hypothesized to be facilitated by indirect selection stemming from cross-sex genetic covariances with components of male fitness. Specifically, polyandry is hypothesized to create positive genetic covariance with male paternity success due to inevitable assortative reproduction, driving ongoing coevolution. However, it remains unclear whether such covariances could or do emerge within complex polyandrous systems. First, we illustrate that genetic covariances between female extra-pair reproduction and male within-pair paternity success might be constrained in socially monogamous systems where female and male additive genetic effects can have opposing impacts on the paternity of jointly reared offspring. Second, we demonstrate nonzero additive genetic variance in female liability for extra-pair reproduction and male liability for within-pair paternity success, modeled as direct and associative genetic effects on offspring paternity, respectively, in free-living song sparrows (Melospiza melodia). The posterior mean additive genetic covariance between these liabilities was slightly positive, but the credible interval was wide and overlapped zero. Therefore, although substantial total additive genetic variance exists, the hypothesis that ongoing evolution of female extra-pair reproduction is facilitated by genetic covariance with male within-pair paternity success cannot yet be definitively supported or rejected either conceptually or empirically. PMID:24724612

  5. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
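
    A rough numerical illustration of the variance decomposition described above: the sketch separates an ensemble of a scalar climate index into the variance of the ensemble-mean time series (the signal shared by all members) and the average spread about that mean (unsynchronized internal variability). The array shapes and toy data are assumptions for illustration, not CESM output.

```python
import numpy as np

def decompose_variance(ensemble):
    """ensemble: (n_members, n_times) array of a scalar climate index.

    Returns the variance of the ensemble-mean time series (signal shared by
    all members) and the time-averaged across-member variance (internal,
    unsynchronized variability)."""
    ens_mean = ensemble.mean(axis=0)
    signal_var = ens_mean.var(ddof=1)
    internal_var = ensemble.var(axis=0, ddof=1).mean()
    return signal_var, internal_var

# Toy data: a common sinusoidal "forced" signal plus member-specific noise.
rng = np.random.default_rng(0)
t = np.arange(120)
forced = 0.3 * np.sin(2 * np.pi * t / 60)
members = forced + rng.normal(0.0, 0.5, size=(40, t.size))
print(decompose_variance(members))   # roughly (0.05, 0.25) for this toy example
```

    In the initialized set the ensemble mean also retains predictable internal variability, which is why comparing the two parallel ensembles isolates the benefit of initialization.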

  6. Null-space and statistical significance of first-arrival traveltime inversion

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.

    2004-03-01

    The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.

  7. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of the results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to e.g. the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points on the jaw and eyebrows had the highest variation. We saw marginal variability with regard to intra-operator differences and portraits. Using a sparse set of landmarks (n=14) that capture the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only a marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh that captures all facial features. PMID:25306436

  8. How good is crude MDL for solving the bias-variance dilemma? An empirical investigation based on Bayesian networks.

    PubMed

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
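
    To make the 'crude MDL' score concrete, the helper below uses the common two-part approximation: a fit term (negative maximized log-likelihood) plus a complexity penalty of (k/2) log n. Whether this matches the paper's exact implementation is an assumption, so it should be read as a sketch of the general idea rather than their code.

```python
import numpy as np

def crude_mdl(log_likelihood, n_params, n_samples):
    """Two-part 'crude MDL' score: smaller is better.

    log_likelihood: maximized log-likelihood of the model on the data
    n_params:       number of free parameters (model complexity, k)
    n_samples:      number of data points (n)
    """
    return -log_likelihood + 0.5 * n_params * np.log(n_samples)

# Hypothetical comparison of two fitted models on 500 samples:
print(crude_mdl(-812.4, n_params=6, n_samples=500))    # simpler model
print(crude_mdl(-804.9, n_params=15, n_samples=500))   # more complex model
```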

  9. How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

    PubMed Central

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204

  10. Estimation of stable boundary-layer height using variance processing of backscatter lidar data

    NASA Astrophysics Data System (ADS)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze fog and the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation using variance processing of attenuated backscatter lidar measurements, and discusses its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under the TEC2015-63832-P project, and from the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
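
    A minimal sketch of the variance-based estimator described above: compute the temporal variance of the attenuated backscatter at each range gate over a time window, and take the height of the lowest pronounced local minimum of that profile as the SBLH candidate. The array layout, smoothing, and search limit are illustrative assumptions and do not reproduce the authors' processing chain.

```python
import numpy as np
from scipy.signal import argrelmin

def sblh_from_variance(backscatter, heights, max_height=1000.0, smooth=5):
    """backscatter: (n_profiles, n_gates) attenuated backscatter over a time window.
    heights: (n_gates,) range-gate heights in metres.
    Returns the height of the lowest local minimum of the temporal variance profile."""
    var_profile = backscatter.var(axis=0)
    # Light running-mean smoothing to suppress gate-to-gate noise.
    kernel = np.ones(smooth) / smooth
    var_smooth = np.convolve(var_profile, kernel, mode="same")
    mask = heights <= max_height
    idx = argrelmin(var_smooth[mask], order=3)[0]
    if idx.size == 0:
        return np.nan
    return heights[mask][idx[0]]
```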

  11. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1, although with a 1 sigma uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  12. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  13. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a very small number of parameters to gain superior robustness and computational efficiency for a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach can successfully measure volume and quantify the distribution of CT numbers in mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially from already acquired scans.

  14. ELUCID - Exploring the Local Universe with ReConstructed Initial Density Field III: Constrained Simulation in the SDSS Volume

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; Zhang, Youcai; Shi, JingJing; Jing, Y. P.; Liu, Chengze; Li, Shijie; Kang, Xi; Gao, Yang

    2016-11-01

    A method we developed recently for the reconstruction of the initial density field in the nearby universe is applied to the Sloan Digital Sky Survey Data Release 7. A high-resolution N-body constrained simulation (CS) of the reconstructed initial conditions, with 3072³ particles evolved in a 500 h⁻¹ Mpc box, is carried out and analyzed in terms of the statistical properties of the final density field and its relation with the distribution of Sloan Digital Sky Survey galaxies. We find that the statistical properties of the cosmic web and the halo populations are accurately reproduced in the CS. The galaxy density field is strongly correlated with the CS density field, with a bias that depends on both galaxy luminosity and color. Our further investigations show that the CS provides robust quantities describing the environments within which the observed galaxies and galaxy systems reside. Cosmic variance is greatly reduced in the CS so that the statistical uncertainties can be controlled effectively, even for samples of small volumes.

  15. On the scaling behavior of hardness with ligament diameter of nanoporous-Au: Constrained motion of dislocations along the ligaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viswanath, R. N.; Polaki, S. R.; Rajaraman, R.

    The scaling behavior of hardness with ligament diameter and vacancy defect concentration in nanoporous Au (np-Au) has been investigated using a combination of Vickers hardness, scanning electron microscopy, and positron lifetime measurements. It is shown that for np-Au, the hardness scales with the ligament diameter with an exponent of −0.3, which is at variance with the conventional Hall-Petch exponent of −0.5 for bulk systems, as seen in the controlled experiments on cold-worked Au with varying grain size. The hardness of np-Au correlates with the vacancy concentration C_V within the ligaments, as estimated from positron lifetime experiments, and scales as C_V^{1/2}, pointing to the interaction of dislocations with vacancies. The distinctive Hall-Petch exponent of −0.3 seen for np-Au, with ligament diameters in the range of 5–150 nm, is rationalized by invoking the constrained motion of dislocations along the ligaments.

  16. Influence of Layer Thickness, Raster Angle, Deformation Temperature and Recovery Temperature on the Shape-Memory Effect of 3D-Printed Polylactic Acid Samples

    PubMed Central

    Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji

    2017-01-01

    The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617

  17. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated with Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
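
    As a small sketch of how a covariance function built on Legendre polynomials is evaluated, the helper below maps days in milk to [-1, 1], forms the vector of Legendre polynomial values, and returns the (co)variance implied by a coefficient covariance matrix. The matrix shown is a made-up placeholder, not an estimate from the Dutch data, and plain (unnormalized) Legendre polynomials are assumed here for simplicity.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def covariance_function(K, t1, t2, t_min=5, t_max=305):
    """(Co)variance between days in milk t1 and t2 implied by a random
    regression on Legendre polynomials with coefficient covariance K."""
    order = K.shape[0] - 1
    def phi(t):
        x = -1.0 + 2.0 * (t - t_min) / (t_max - t_min)   # standardize DIM to [-1, 1]
        return legvander(np.atleast_1d(x), order)[0]      # [P_0(x), ..., P_order(x)]
    return phi(t1) @ K @ phi(t2)

# Placeholder third-order genetic coefficient covariance matrix (illustrative only).
K_gen = np.array([[4.0, 0.5, 0.2, 0.1],
                  [0.5, 1.0, 0.1, 0.0],
                  [0.2, 0.1, 0.4, 0.0],
                  [0.1, 0.0, 0.0, 0.2]])
print(covariance_function(K_gen, 60, 200))   # genetic covariance between DIM 60 and 200
```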

  18. Shared environmental influences on personality: A combined twin and adoption approach

    PubMed Central

    Matteson, Lindsay K.; McGue, Matt; Iacono, William G.

    2013-01-01

    In the past, shared environmental influences on personality traits have been found to be negligible in behavior genetic studies (e.g., Bouchard & McGue, 2003). However, most studies have been based on biometrical modeling of twins only. Failure to meet key assumptions of the classical twin design could lead to biased estimates of shared environmental effects. Alternative approaches to the etiology of personality are needed. In the current study we estimated the impact of shared environmental factors on adolescent personality by simultaneously modeling both twin and adoption data. We found evidence for significant shared environmental influences on Multidimensional Personality Questionnaire (MPQ) Absorption (15% variance explained), Alienation (10%), Harm Avoidance (14%), and Traditionalism (26%) scales. Additionally, we found that in most cases biometrical models constraining parameter estimates to be equal across study type (twins versus adoptees) fit no worse than models allowing these parameters to vary; this suggests that results converge across study design despite the potential (sometimes opposite) biases of twin and adoption studies. Thus, we can be more confident that our findings represent the true contribution of shared environmental variance to personality development. PMID:24065564

  19. Extending i-line capabilities through variance characterization and tool enhancement

    NASA Astrophysics Data System (ADS)

    Miller, Dan; Salinas, Adrian; Peterson, Joel; Vickers, David; Williams, Dan

    2006-03-01

    Continuous economic pressures have moved a large percentage of integrated device manufacturing (IDM) operations either overseas or to foundry operations over the last 10 years. These pressures have left the IDM fabs in the U.S. with required COO improvements in order to maintain operations domestically. While the assets of many of these factories are at a very favorable point in the depreciation life cycle, the equipment and processes are constrained to the quality of the equipment in its original state and the degradation over its installed life. With the objective to enhance output and improve process performance, this factory and its primary lithography process tool supplier have been able to extend the usable life of the existing process tools, increase the output of the tool base, and improve the distribution of the CDs on the product produced. Texas Instruments Incorporated led an investigation with the POLARIS® Systems & Services business of FSI International to determine the sources of variance in the i-line processing of a wide array of IC device types. Data from the sources of variance were investigated, such as PEB temperature, PEB delay time, develop recipe, develop time, and develop programming. While PEB processes are a primary driver of acid-catalyzed resists, the develop mode is shown in this work to have an overwhelming impact on the wafer-to-wafer and across-wafer CD performance of these i-line processes. These changes have improved the wafer-to-wafer CD distribution by more than 80%, and the within-wafer CD distribution by more than 50%, while enabling a greater than 50% increase in lithography cluster throughput. The paper will discuss the contribution from each of the sources of variance and their importance in overall system performance.

  20. Branch xylem density variations across the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Patiño, S.; Lloyd, J.; Paiva, R.; Baker, T. R.; Quesada, C. A.; Mercado, L. M.; Schmerler, J.; Schwarz, M.; Santos, A. J. B.; Aguilar, A.; Czimczik, C. I.; Gallo, J.; Horna, V.; Hoyos, E. J.; Jimenez, E. M.; Palomino, W.; Peacock, J.; Peña-Cruz, A.; Sarmiento, C.; Sota, A.; Turriago, J. D.; Villanueva, B.; Vitzthum, P.; Alvarez, E.; Arroyo, L.; Baraloto, C.; Bonal, D.; Chave, J.; Costa, A. C. L.; Herrera, R.; Higuchi, N.; Killeen, T.; Leal, E.; Luizão, F.; Meir, P.; Monteagudo, A.; Neil, D.; Núñez-Vargas, P.; Peñuela, M. C.; Pitman, N.; Priante Filho, N.; Prieto, A.; Panfil, S. N.; Rudas, A.; Salomão, R.; Silva, N.; Silveira, M.; Soares Dealmeida, S.; Torres-Lezama, A.; Vásquez-Martínez, R.; Vieira, I.; Malhi, Y.; Phillips, O. L.

    2009-04-01

    Xylem density is a physical property of wood that varies between individuals, species and environments. It reflects the physiological strategies of trees that lead to growth, survival and reproduction. Measurements of branch xylem density, ρx, were made for 1653 trees representing 598 species, sampled from 87 sites across the Amazon basin. Measured values ranged from 218 kg m-3 for a Cordia sagotii (Boraginaceae) from Montagne de Tortue, French Guiana, to 1130 kg m-3 for an Aiouea sp. (Lauraceae) from Caxiuana, Central Pará, Brazil. Analysis of variance showed significant differences in average ρx across regions and sampled plots, as well as significant differences between families, genera and species. A partitioning of the total variance in the dataset showed that species identity (family, genus and species) accounted for 33%, with environment (geographic location and plot) accounting for an additional 26%; the remaining "residual" variance accounted for 41% of the total. Variations in plot means were, however, not attributable to differences in species composition alone, because the xylem density of the most widely distributed species in our dataset varied systematically from plot to plot. Thus, as well as having a genetic component, branch xylem density is a plastic trait that, for any given species, varies in a predictable manner according to where the tree is growing. Within the analysed taxa, exceptions to this general rule seem to be pioneer species, belonging for example to the Urticaceae, whose branch xylem density is more constrained than that of most species sampled in this study. These patterns of variation of branch xylem density across Amazonia suggest a large functional diversity amongst Amazonian trees which is not yet well understood.

  1. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a linear system under a minimum mean-square error criterion, incorporating a tapped delay line (TDL) in which all the full-precision multiplications are constrained to be powers of two. A linear equalizer based on a dispersive channel with additive noise is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
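
    The key implementation idea, constraining each tap to a power of two so that every multiplication reduces to a bit shift, can be sketched as follows. This is a minimal illustration under assumed values, not the paper's optimization procedure; the tap values and exponent range are made up.

```python
import numpy as np

def nearest_power_of_two(c, min_exp=-8, max_exp=0):
    """Quantize a coefficient to the nearest signed power of two (in log2 terms).

    Coefficients below the smallest representable power are zeroed.
    """
    if c == 0 or abs(c) < 2.0 ** (min_exp - 1):
        return 0.0
    exp = np.clip(np.round(np.log2(abs(c))), min_exp, max_exp)
    return float(np.sign(c)) * 2.0 ** exp

# Hypothetical full-precision equalizer taps (e.g., from an MMSE design).
taps = np.array([0.031, -0.12, 0.47, 1.02, 0.47, -0.12, 0.031])
q_taps = np.array([nearest_power_of_two(c) for c in taps])

# With power-of-two taps, the TDL output is a sum of shifted inputs
# instead of full multiplications.
x = np.random.default_rng(0).standard_normal(256)   # received samples
y_full = np.convolve(x, taps, mode="same")
y_p2 = np.convolve(x, q_taps, mode="same")
print("quantized taps:", q_taps)
print("output MSE due to quantization:", np.mean((y_full - y_p2) ** 2))
```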

  2. Low authority-threshold control for large flexible structures

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.

    1988-01-01

    An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.

  3. Detection of Ionospheric Alfven Resonator Signatures in the Equatorial Ionosphere

    NASA Technical Reports Server (NTRS)

    Simoes, Fernando; Klenzing, Jeffrey; Ivanov, Stoyan; Pfaff, Robert; Freudenreich, Henry; Bilitza, Dieter; Rowland, Douglas; Bromund, Kenneth; Liebrecht, Maria Carmen; Martin, Steven; hide

    2012-01-01

    The ionosphere's response to the solar activity minimum of cycle 23/24 was unusual and offered unique opportunities for investigating space weather in the near-Earth environment. We report ultra-low-frequency electric field signatures related to the ionospheric Alfven resonator detected by the Communications/Navigation Outage Forecasting System (C/NOFS) satellite in the equatorial region. These signatures are used to constrain ionospheric empirical models and offer a new approach for monitoring ionosphere dynamics and space weather phenomena, namely aeronomy processes, Alfven wave propagation, and troposphere-ionosphere-magnetosphere coupling mechanisms.

  4. Trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Mease, Kenneth D.; Vanburen, Mark A.

    1989-01-01

    The first step in the approach to developing guidance laws for a horizontal take-off, air-breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum-fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general-purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96, was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an afterburning turbojet, a ramjet and a scramjet, are used in the air-breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air-breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air-breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.

  5. A quantitative method for evaluating inferior glenohumeral joint stiffness using ultrasonography.

    PubMed

    Tsai, Wen-Wei; Lee, Ming-Yih; Yeh, Wen-Lin; Cheng, Shih-Chung; Soon, Kok-Soon; Lei, Kin Fong; Lin, Wen-Yen

    2013-02-01

    Subluxation of the affected shoulder in post-stroke patients is associated with nerve disorders and muscle fatigue. Clinicians must be able to accurately and reliably measure inferior glenohumeral subluxation in patients to provide appropriate treatment. However, quantitative methods for evaluating the laxity and stiffness of the glenohumeral joint (GHJ) are still being developed. The aim of this study was to develop a new protocol for evaluating the laxity and stiffness of the inferior GHJ using ultrasonography under optimal testing conditions, and to investigate changes in the GHJ from a commercially available humerus brace and shoulder brace. Multistage inferior displacement forces were applied to create a glide between the most cephalad point on the visible anterosuperior surface of the humeral head and the coracoid process in seven healthy volunteers. GHJ stiffness was defined as the slope of the linear regression line between the glides and the different testing loads. The testing conditions were defined by different test loading mechanisms (n=2), shoulder constraining conditions (n=2), and loading modes (n=4). The optimal testing condition was defined as the condition with the least residual variance of the measured laxity relative to the calculated stiffness under the different testing loads. A paired t-test was used to compare the laxity and stiffness of the inferior GHJ using the different braces. No significant difference was identified between the two test loading mechanisms (t=0.218, p=0.831) or the two shoulder constraining conditions (t=-0.235, p=0.818). We concluded that ultrasonographic laxity measurements performed using a pulley-set loading mechanism were as reliable as those using direct loading. Additionally, constraining the unloaded shoulder was proposed due to its lower mean residual variance. Moreover, pulling the elbow downward with loading on the upper arm was suggested, as pulling the elbow downward with the elbow flexed and loading on the forearm may overestimate stiffness and pain in the inferior GHJ at the loading point due to friction between the wide belt and the skin. Furthermore, subjects wearing a humerus brace with a belt, which creates the effect of lifting the humerus toward the acromion, had greater GHJ stiffness than subjects wearing a shoulder brace without such a belt under the proposed testing conditions. This study provides experimental evidence that shoulder braces may reduce GHJ laxity under an external load, implying that the use of a humeral brace can prevent subluxation in post-stroke patients. The resulting optimal testing condition for measuring the laxity and stiffness of the GHJ is to constrain the unloaded shoulder and bend the loaded arm at the elbow, with loading on the upper arm using a pulley system. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Geochronological constraints on the evolution of El Hierro (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Becerril, Laura; Ubide, Teresa; Sudo, Masafumi; Martí, Joan; Galindo, Inés; Galé, Carlos; Morales, Jose María; Yepes, Jorge; Lago, Marceliano

    2016-01-01

    New age data have been obtained to constrain in time the recent Quaternary volcanism of El Hierro (Canary Islands) and to estimate its recurrence rate. We have carried out 40Ar/39Ar geochronology on samples spanning the entire volcanostratigraphic sequence of the island and 14C geochronology on the most recent eruption on the northeast rift of the island: 2280 ± 30 yr BP. We combine the new absolute data with a revision of published ages onshore, some of which were identified through geomorphological criteria (relative data). We present a revised and updated chronology of volcanism for the last 33 ka, which we use to estimate the maximum eruptive recurrence of the island. The number of events per year is determined to be 9.7 × 10^-4 for the emerged part of the island, meaning that, as a minimum, one eruption has occurred approximately every 1000 years. This highlights the need for more geochronological data to better constrain the eruptive recurrence of El Hierro.

  7. VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)

    NASA Astrophysics Data System (ADS)

    Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.

    2013-11-01

    We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data were analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and were cross-calibrated against ISO fluxes. (2 data files).

  8. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
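
    As a rough illustration of the kind of comparison described, the sketch below contrasts the mean-squared error of the posterior-mean Bayes estimator under a gamma prior with that of the maximum likelihood estimator, which for the Poisson parameter coincides with the minimum variance unbiased estimator (the sample mean). The intensity, sample size, and hyperparameters are arbitrary choices for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.0          # assumed Poisson intensity
n, trials = 10, 20000   # sample size and Monte Carlo replications
alpha, beta = 2.0, 1.0  # illustrative gamma prior hyperparameters (rate form)

x = rng.poisson(lam_true, size=(trials, n))
mle = x.mean(axis=1)                          # MLE (= MVU estimator here)
bayes = (alpha + x.sum(axis=1)) / (beta + n)  # posterior mean under gamma prior

print("MSE of MLE/MVUE :", np.mean((mle - lam_true) ** 2))
print("MSE of Bayes    :", np.mean((bayes - lam_true) ** 2))
```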

  9. Global-scale high-resolution ( 1 km) modelling of mean, maximum and minimum annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with that of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  10. Law of the Minimum paradoxes.

    PubMed

    Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A

    2011-09-01

    The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010

  11. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    NASA Astrophysics Data System (ADS)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance (CDD) function, with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among the monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r^2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r^2 and distance was modelled according to the following equation:

    Log(r^2_ij) = b * d_ij    (1)

    where Log(r^2_ij) is the common variance between the target (i) and neighbouring (j) series, d_ij the distance between them, and b the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) does not on average exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along coastal areas and lower variability inland. The highest spatial variability coincides particularly with coastal areas surrounded by mountain chains, suggesting that orography is one of the main driving factors causing higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than diurnal temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a higher network density would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series could be evaluated at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
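
    To make the procedure concrete, here is a minimal sketch of the CDD calculation for one target station: compute the common variance r^2 against every neighbour, fit the no-intercept log-linear model of equation (1), and solve for the distance at which r^2 falls to the 50% threshold. The station network and anomaly field are synthetic, and the real analysis restricts the fit to stations within a starting radius; both simplifications are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_st, n_mon, L = 40, 360, 300.0   # stations, months, correlation length (km)

# Hypothetical station coordinates and a synthetic anomaly field whose true
# correlation decays exponentially with distance (via a Cholesky factor).
xy = rng.uniform(0, 800, size=(n_st, 2))
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
C = np.exp(-d / L)
anom = np.linalg.cholesky(C + 1e-10 * np.eye(n_st)) @ rng.standard_normal((n_st, n_mon))

# CDD for one target station: regress Log(r^2_ij) on d_ij through the origin,
# then solve for the distance where the common variance drops to 50%.
i = 0
others = [j for j in range(n_st) if j != i]
r2 = np.array([np.corrcoef(anom[i], anom[j])[0, 1] ** 2 for j in others])
dij = d[i, others]
b = np.sum(dij * np.log(r2)) / np.sum(dij ** 2)   # least squares, no intercept
print("CDD (r^2 = 0.5):", np.log(0.5) / b, "km")
```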

  12. Evaluating the Utility of Adjoint-based Inverse Modeling with Aircraft and Surface Measurements during ARCTAS-CARB to Constrain Wildfire Emissions of Black Carbon

    NASA Astrophysics Data System (ADS)

    Henze, D. K.; Guerrette, J.; Bousserez, N.

    2016-12-01

    Wildfires contribute significantly to regional haze events globally, and they are potentially becoming more commonplace with increasing droughts due to climate change. Aerosol emissions from wildfires are highly uncertain, with global annual totals varying by a factor of 2 to 3 and regional rates varying by up to a factor of 10. At the high resolution required to predict PM2.5 exposure events, this variance is attributable to differences in methodology, differing land cover datasets, spatial variation in fire locations, and limited understanding of fast transient fire behavior. Here we apply an adjoint-based online chemical inverse modeling tool, WRFDA-Chem, to constrain black carbon aerosol (BC) emissions from fires during the 2008 ARCTAS-CARB field campaign. We identify several weaknesses in the prior diurnal distribution of emissions, including a missing early morning emission peak associated with local, persistent, large-scale forest fires. On 22 June, 2008, aircraft observations are able to reduce the spread between FINNv1.0 and QFEDv2.4r8 from ×3.5 to ×2.1. On 23 and 24 June, the spread is reduced from ×3.4 to ×1.4. Using posterior error estimates, we found that emission variance improvements are limited to a small footprint surrounding the measurements. Relative BB emission variances are reduced by up to 35% near aircraft flight paths and up to 60% near IMPROVE surface sites. Due to the spatial variation of observations on multiple days, and the heterogeneous biomass burning errors on daily scales, cross-validation was not successful. Future high-resolution measurements need to be carefully planned to characterize biomass burning emission errors and control for day-to-day variation. In general, the 4D-Var inversion framework would benefit from reduced wall-time. For the problem presented, incremental 4D-Var requires 20 hours on 96 cores to reach practical optimization convergence and generate the posterior covariance matrix for a 24-hour assimilation window. We will present initial computational comparisons with a recently developed method to parallelize those calculations, which will reduce wall-time by a factor of 5 or more for all WRFDA 4D-Var applications.

  13. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams under a constant total bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. Working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences without sacrificing coding efficiency.
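
    The flavor of an equal-quality allocation can be seen in a much-simplified setting. Assuming a classical exponential rate-distortion model D_i(R_i) = sigma_i^2 * 2^(-2*R_i) for each stream (an illustrative stand-in, not the paper's rho-domain model), forcing all distortions to be equal under a total-rate constraint yields the closed-form allocation below; the variances and bit budget are invented.

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 0.25, 2.0])   # hypothetical per-stream variances
R_total = 12.0                              # total bit budget (bits/sample)
N = len(sigma2)

# Equal-distortion (zero distortion variance) allocation in closed form:
# R_i = R_total/N + 0.5*log2(sigma_i^2 / geometric mean of the variances).
geo_mean = np.exp(np.mean(np.log(sigma2)))
R = R_total / N + 0.5 * np.log2(sigma2 / geo_mean)
D = sigma2 * 2.0 ** (-2 * R)                # resulting per-stream distortions

print("rates      :", R, "sum =", R.sum())
print("distortions:", D)                    # identical across streams
```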

  14. Dynamical Analysis of the Circumprimary Planet in the Eccentric Binary System HD 59686

    NASA Astrophysics Data System (ADS)

    Trifonov, Trifon; Lee, Man Hoi; Reffert, Sabine; Quirrenbach, Andreas

    2018-04-01

    We present a detailed orbital and stability analysis of the HD 59686 binary-star planet system. HD 59686 is a single-lined, moderately close (a_B = 13.6 au), eccentric (e_B = 0.73) binary, where the primary is an evolved K giant with mass M = 1.9 M_⊙ and the secondary is a star with a minimum mass of m_B = 0.53 M_⊙. Additionally, on the basis of precise radial velocity (RV) data, a Jovian planet with a minimum mass of m_p = 7 M_Jup, orbiting the primary on a nearly circular S-type orbit with e_p = 0.05 and a_p = 1.09 au, has recently been announced. We investigate large sets of orbital fits consistent with HD 59686's RV data by applying bootstrap and systematic grid-search techniques coupled with self-consistent dynamical fitting. We perform long-term dynamical integrations of these fits to constrain the permitted orbital configurations. We find that if the binary and the planet in this system have prograde and aligned coplanar orbits, there are narrow regions of stable orbital solutions locked in a secular apsidal alignment with the angle between the periapses, Δω, librating about 0°. We also test a large number of mutually inclined dynamical models in an attempt to constrain the three-dimensional orbital architecture. We find that for nearly coplanar and retrograde orbits with mutual inclination 145° ≲ Δi ≤ 180°, the system is fully stable for a large range of orbital solutions.

  15. Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.

    PubMed

    Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X

    2018-01-01

    To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.

  16. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak-accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates with minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
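
    The "unbiased estimates with minimum variance" property is exactly what the ordinary kriging system enforces: weights that sum to one (unbiasedness) are chosen to minimize the error variance via a Lagrange multiplier. A minimal sketch of that system follows; the toy observations and the spherical variogram parameters are fabricated for illustration.

```python
import numpy as np

# Hypothetical snow-depth observations (x, y in km; depth in m).
pts = np.array([[0.2, 0.4], [0.9, 0.1], [0.5, 0.8], [0.1, 0.9]])
z = np.array([1.2, 0.8, 1.6, 1.4])
x0 = np.array([0.5, 0.5])                      # prediction location

def sph_variogram(h, nugget=0.0, sill=0.3, rng_=1.0):
    """Spherical variogram model (assumed parameters, for illustration)."""
    h = np.minimum(h / rng_, 1.0)
    return nugget + sill * (1.5 * h - 0.5 * h ** 3)

# Ordinary kriging system: weights summing to 1 (unbiased) that minimize the
# estimation variance, solved with a Lagrange multiplier mu.
n = len(z)
h = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
A = np.ones((n + 1, n + 1))
A[:n, :n] = sph_variogram(h)
A[n, n] = 0.0
b = np.append(sph_variogram(np.linalg.norm(pts - x0, axis=1)), 1.0)
sol = np.linalg.solve(A, b)
w, mu = sol[:n], sol[n]
print("prediction      :", w @ z)
print("kriging variance:", w @ b[:n] + mu)
```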

  17. Unbiased estimation in seamless phase II/III trials with unequal treatment effect variances and hypothesis-driven selection rules.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-09-30

    Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  18. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first-degree polynomial) approximation of genotypic and environmental effects to a second-degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad-sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that GxE interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
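
    A minimal simulation makes the parabola explicit. Writing the phenotype as P = g + e + k*g*e, an additive model plus a multiplicative term (the coefficient name k and all parameter values here are illustrative, not from the paper), the environmental variance given the genotype is (1 + k*g)^2 * sigma_e^2, a convex parabola in g with its minimum at g = -1/k:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 0.5            # multiplicative interaction coefficient (illustrative)
sigma_e = 1.0
g_values = np.linspace(-4, 2, 7)

# Model: P = g + e + k*g*e  =>  Var(P | g) = (1 + k*g)^2 * sigma_e^2,
# a convex parabola in the genotypic value with minimum at g = -1/k.
for g in g_values:
    e = rng.normal(0, sigma_e, 100_000)
    p = g + e + k * g * e
    print(f"g = {g:+.1f}  empirical Var = {p.var():.3f}  "
          f"theoretical = {(1 + k * g) ** 2 * sigma_e ** 2:.3f}")
```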

  19. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  20. A Quantitative Microscopy Technique for Determining the Number of Specific Proteins in Cellular Compartments

    PubMed Central

    Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.

    2013-01-01

    This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731

  1. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
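
    For context, the sketch below fits the plain maximum likelihood estimator for type 1 left-censored lognormal data, the starting point that the paper's AMLE then bias-adjusts; the adjustment itself is not reproduced here. The true parameters, sample size, and detection limit are invented.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
mu_true, sigma_true, dl = 0.0, 1.0, 0.8     # lognormal params, detection limit
x = rng.lognormal(mu_true, sigma_true, 200)
censored = x < dl                            # type 1 left-censoring indicator
obs = np.where(censored, dl, x)

def negloglik(theta):
    mu, log_sigma = theta
    s = np.exp(log_sigma)                    # enforce sigma > 0
    ll = np.where(
        censored,
        # Censored values contribute P(X < detection limit).
        stats.norm.logcdf((np.log(dl) - mu) / s),
        # Uncensored values contribute the lognormal log-density.
        stats.norm.logpdf((np.log(obs) - mu) / s) - np.log(s * obs),
    )
    return -ll.sum()

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("MLE mu, sigma:", res.x[0], np.exp(res.x[1]))
```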

  2. Information dynamics in carcinogenesis and tumor growth.

    PubMed

    Gatenby, Robert A; Frieden, B Roy

    2004-12-21

    The storage and transmission of information is vital to the function of normal and transformed cells. We use methods from information theory and Monte Carlo theory to analyze the role of information in carcinogenesis. Our analysis demonstrates that, during somatic evolution of the malignant phenotype, the accumulation of genomic mutations degrades intracellular information. However, the degradation is constrained by the Darwinian somatic ecology in which mutant clones proliferate only when the mutation confers a selective growth advantage. In that environment, genes that normally decrease cellular proliferation, such as tumor suppressor or differentiation genes, suffer maximum information degradation. Conversely, those that increase proliferation, such as oncogenes, are conserved or exhibit only gain-of-function mutations. These constraints shield most cellular populations from catastrophic mutator-induced loss of the transmembrane entropy gradient and, therefore, cell death. The dynamics of constrained information degradation during carcinogenesis cause the tumor genome to asymptotically approach a minimum information state that is manifested clinically as dedifferentiation and unconstrained proliferation. Extreme physical information (EPI) theory demonstrates that altered information flow from cancer cells to their environment will manifest in vivo as power law tumor growth with an exponent of 1.62. This prediction is based only on the assumption that tumor cells are at an absolute information minimum and are capable of "free field" growth, that is, they are unconstrained by external biological parameters. The prediction agrees remarkably well with several studies demonstrating power law growth in small human breast cancers with an exponent of 1.72 ± 0.24. This successful derivation of an analytic expression for cancer growth from EPI alone supports the conceptual model that carcinogenesis is a process of constrained information degradation and that malignant cells are minimum information systems. EPI theory also predicts that the estimated age of a clinically observed tumor is subject to a root-mean-square error of about 30%. This is due to information loss and tissue disorganization and probably manifests as a randomly variable lag phase in the growth pattern that has been observed experimentally. This difference between tumor size and age may impose a fundamental limit on the efficacy of screening based on early detection of small tumors. Independent of the EPI analysis, Monte Carlo methods are applied to predict statistical tumor growth due to perturbed information flow from the environment into transformed cells. A "simplest" Monte Carlo model is suggested by the findings in the EPI approach that tumor growth arises out of a minimally complex mechanism. The outputs of large numbers of simulations show that (a) about 40% of the populations do not survive the first two generations due to mutations in critical gene segments; but (b) those that do survive will experience power law growth identical to the predicted rate obtained from the independent EPI approach. The agreement between these two very different approaches to the problem strongly supports the idea that tumor cells regress to a state of minimum information during carcinogenesis, and that information dynamics are integrally related to tumor development and growth.

  3. Female and male genetic effects on offspring paternity: additive genetic (co)variances in female extra-pair reproduction and male paternity success in song sparrows (Melospiza melodia).

    PubMed

    Reid, Jane M; Arcese, Peter; Keller, Lukas F; Losdat, Sylvain

    2014-08-01

    Ongoing evolution of polyandry, and consequent extra-pair reproduction in socially monogamous systems, is hypothesized to be facilitated by indirect selection stemming from cross-sex genetic covariances with components of male fitness. Specifically, polyandry is hypothesized to create positive genetic covariance with male paternity success due to inevitable assortative reproduction, driving ongoing coevolution. However, it remains unclear whether such covariances could or do emerge within complex polyandrous systems. First, we illustrate that genetic covariances between female extra-pair reproduction and male within-pair paternity success might be constrained in socially monogamous systems where female and male additive genetic effects can have opposing impacts on the paternity of jointly reared offspring. Second, we demonstrate nonzero additive genetic variance in female liability for extra-pair reproduction and male liability for within-pair paternity success, modeled as direct and associative genetic effects on offspring paternity, respectively, in free-living song sparrows (Melospiza melodia). The posterior mean additive genetic covariance between these liabilities was slightly positive, but the credible interval was wide and overlapped zero. Therefore, although substantial total additive genetic variance exists, the hypothesis that ongoing evolution of female extra-pair reproduction is facilitated by genetic covariance with male within-pair paternity success cannot yet be definitively supported or rejected either conceptually or empirically. © 2014 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  4. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

    The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to genetically evaluate Alpine goats and to estimate parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models had combinations of distinct fitting orders for the fixed (2-5), random genetic (1-7), and permanent environmental (1-7) curves, and a number of classes for residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best Legendre-polynomial random regression model for the genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, additive genetic and permanent environmental regressions and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the parameter estimates and the prediction of genetic values.
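
    The basic machinery of such a test-day model is the matrix of Legendre covariates evaluated at standardized days in milk. A minimal sketch follows; the lactation-day range is an assumed value, the fitting orders are taken loosely from the abstract, and the convention that "order" equals the number of polynomial terms is an assumption of this illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, t, t_min=5, t_max=305):
    """Legendre covariates for a random regression test-day model.

    t: days in milk, rescaled to [-1, 1]; dim: number of polynomial terms.
    The 5-305 day lactation window is assumed for illustration.
    """
    x = -1.0 + 2.0 * (np.asarray(t, float) - t_min) / (t_max - t_min)
    # Column k holds the k-th Legendre polynomial evaluated at x.
    return np.column_stack([legendre.legval(x, np.eye(dim)[k]) for k in range(dim)])

dim_fixed, dim_genetic = 4, 2      # fitting orders echoing the best model above
days = np.array([10, 60, 120, 200, 280])
print(legendre_covariates(dim_fixed, days))
```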

  5. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of the spatial structures, and their low- to higher-order statistics, used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs vary widely, meaning that the compatibilities of different CTIs with the conditioning data differ. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on the minimum data event distance (MDevD). In the proposed method, a set of MDevD properties is established through calculation of the MDevD of conditioning data events in each CTI. CTIs are then evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation is attached to the paper.
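
    A brute-force version of the central quantity conveys the idea: slide the geometry of a conditioning data event over the TI and record the smallest mismatch found. The binary TI, the event geometry, and the Hamming-type distance used here are all illustrative assumptions; the paper's exact distance definition may differ.

```python
import numpy as np

def min_data_event_distance(ti, offsets, values):
    """Smallest distance between a conditioning data event and any TI event.

    ti: 2D categorical training image; offsets: (k, 2) lag vectors defining
    the data-event geometry; values: the k conditioning values.
    An exhaustive scan, for illustration only.
    """
    h, w = ti.shape
    lo = offsets.min(axis=0).clip(max=0)     # most negative lags per axis
    hi = offsets.max(axis=0).clip(min=0)     # most positive lags per axis
    best = np.inf
    for i in range(-lo[0], h - hi[0]):
        for j in range(-lo[1], w - hi[1]):
            event = ti[i + offsets[:, 0], j + offsets[:, 1]]
            # Mismatch fraction for categorical variables.
            best = min(best, np.mean(event != values))
    return best

rng = np.random.default_rng(5)
ti = (rng.random((100, 100)) < 0.3).astype(int)        # hypothetical binary TI
offsets = np.array([[0, 0], [0, 3], [3, 0], [-2, 2]])  # data-event geometry
values = np.array([1, 1, 0, 1])                        # conditioning values
print("MDevD:", min_data_event_distance(ti, offsets, values))
```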

  6. Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation.

    PubMed

    Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping

    2016-02-01

    Previous simulation work concerned with the mechanism of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variance was also observed within layers; reducing the pulse length could magnify the threshold values as well as the variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation closely resemble recordings in physiological and clinical studies, which suggests that, with minimal simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluations of cell activity in response to an externally applied field.

  7. Three-Point Correlations in the COBE DMR 2 Year Anisotropy Maps

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Banday, A. J.; Bennett, C. L.; Gorski, K. M.; Kogut, A.

    1995-01-01

    We compute the three-point temperature correlation function of the COBE Differential Microwave Radiometer (DMR) 2 year sky maps to search for evidence of non-Gaussian temperature fluctuations. We detect three-point correlations in our sky with a substantially higher signal-to-noise ratio than from the first-year data. However, the magnitude of the signal is consistent with the level of cosmic variance expected from Gaussian fluctuations, even when the low-order multipole moments, up to l = 9, are filtered from the data. These results do not strongly constrain most existing models of structure formation, but the absence of intrinsic three-point correlations on large angular scales is an important consistency test for such models.

  8. Modeling PSInSAR time series without phase unwrapping

    USGS Publications Warehouse

    Zhang, L.; Ding, X.; Lu, Z.

    2011-01-01

    In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need for phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix for the correlated observations. The deformation rates at coherent points are estimated with a least-squares model constrained by reference points. The proposed approach is verified with a set of simulated data.
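
    The estimation step can be sketched as a small weighted least-squares problem: arc observations difference the deformation rates of pairs of points, the pseudoinverse of the variance-covariance matrix serves as the weight matrix, and a reference point is fixed to remove the datum defect. The network, rates, and covariance below are fabricated, and the preceding ambiguity-screening step is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical arc observations: each row of A differences two of 5 point
# deformation rates (mm/yr), as in a network of arcs between coherent points.
A = np.array([[1, -1, 0, 0, 0],
              [0, 1, -1, 0, 0],
              [0, 0, 1, -1, 0],
              [0, 0, 0, 1, -1],
              [1, 0, 0, 0, -1]], float)
v_true = np.array([0.0, 2.0, 3.5, 1.0, -1.0])      # point 0 is the reference
Sigma = 0.2 * (np.eye(5) + 0.5 * np.ones((5, 5)))  # correlated observation errors
y = A @ v_true + rng.multivariate_normal(np.zeros(5), Sigma)

# Weighted least squares with the pseudoinverse of the variance-covariance
# matrix as weight, constrained by fixing the reference point to zero.
W = np.linalg.pinv(Sigma)
A_r = A[:, 1:]                                     # drop the reference column
v_hat = np.linalg.solve(A_r.T @ W @ A_r, A_r.T @ W @ y)
print("estimated rates (reference point fixed at 0):", v_hat)
```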

  9. Design for minimum energy in interstellar communication

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial-civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of the dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Whereas the terrestrial approach of adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic, and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
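
    The floor referred to can be reproduced with a one-line calculation: for unbounded bandwidth, Shannon's limit gives a received energy per bit of at least N0*ln(2), and with the cosmic microwave background as the dominant noise source, N0 ≈ k_B*T_cmb. This back-of-envelope sketch is our gloss on the abstract's claim, not a formula quoted from the paper.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T_cmb = 2.725           # cosmic microwave background temperature, K

# Shannon limit as bandwidth grows without bound: E_b >= N0 * ln(2),
# with N0 = k_B * T for thermal background noise at temperature T.
E_b_min = k_B * T_cmb * math.log(2)
print(f"minimum received energy per bit: {E_b_min:.3e} J")   # ~2.6e-23 J
```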

  10. Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2011-01-01

    Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: of an information maximum to minimum. The progressive information loss is evident in the accumulating mutations, disordered morphology, and functional decline characteristic of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.

  11. Effect of load introduction on graphite epoxy compression specimens

    NASA Technical Reports Server (NTRS)

    Reiss, R.; Yao, T. M.

    1981-01-01

    Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping which induces bending into the specimen. An analytical model capable of quantifying these foregoing effects was developed which is based upon the principle of minimum complementary energy. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.

  12. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the process does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
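
    The contrast with sequential targeting can be illustrated with a toy simultaneous problem: minimize the sum of burn magnitudes subject to linear trajectory constraints, supplying the analytic gradient of the objective as the abstract advocates. The constraint matrix and targets below are invented stand-ins for the state-transition-matrix partials a real ISS problem would use.

```python
import numpy as np
from scipy.optimize import minimize

# Toy simultaneous burn targeting: choose two planar burns (4 variables)
# minimizing total delta-v while meeting linear trajectory constraints B v = c.
B = np.array([[1.0, 0.0, 0.6, 0.2],
              [0.0, 1.0, -0.3, 0.9]])
c = np.array([3.0, 1.5])

def total_dv(v):
    return np.linalg.norm(v[:2]) + np.linalg.norm(v[2:])

def total_dv_grad(v):        # analytic gradient, the point of the abstract
    g = np.empty(4)
    g[:2] = v[:2] / (np.linalg.norm(v[:2]) + 1e-12)
    g[2:] = v[2:] / (np.linalg.norm(v[2:]) + 1e-12)
    return g

cons = {"type": "eq", "fun": lambda v: B @ v - c, "jac": lambda v: B}
res = minimize(total_dv, x0=np.ones(4), jac=total_dv_grad,
               constraints=[cons], method="SLSQP")
print("burns:", res.x, " total delta-v:", res.fun)
```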

  13. Minimum impulse transfers to rotate the line of apsides

    NASA Technical Reports Server (NTRS)

    Phong, Connie; Sweetser, Theodore H.

    2005-01-01

    Transfer between two coplanar orbits can be accomplished via a single impulse if the two orbits intersect. Optimization of a single-impulse transfer, however, is not possible, since the transfer orbit is completely constrained by the initial and final orbits. On the other hand, two-impulse transfers are possible between any two terminal orbits. While optimal scenarios are not known for the general two-impulse case, there are various approximate solutions to many special cases. We consider the problem of an in-plane rotation of the line of apsides, leaving the size and shape of the orbit unaffected.
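
    For the single-impulse case, a standard two-body result makes the geometry concrete: the old and new ellipses (same a and e, apsides rotated by Δω) intersect where only the radial velocity differs, giving Δv = 2 e sqrt(μ/p) sin(Δω/2) with p = a(1 - e²). The sketch below evaluates this textbook formula for an illustrative orbit; it is background to the problem, not the paper's two-impulse solution.

```python
import math

mu = 398600.4418        # Earth's gravitational parameter, km^3/s^2
a, e = 8000.0, 0.2      # illustrative orbit: semi-major axis (km), eccentricity
dw = math.radians(30)   # desired rotation of the line of apsides

# Single impulse at an intersection of the old and new ellipses: only the
# radial velocity component flips sign, so
#   delta-v = 2 * e * sqrt(mu/p) * sin(dw/2),  p = a * (1 - e^2).
p = a * (1 - e ** 2)
dv = 2 * e * math.sqrt(mu / p) * math.sin(dw / 2)
print(f"single-impulse delta-v: {dv:.4f} km/s")
```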

  14. On the functional optimization of a certain class of nonstationary spatial functions

    USGS Publications Warehouse

    Christakos, G.; Paraskevopoulos, P.N.

    1987-01-01

    Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria and are applicable to multidimensional phenomena characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.

  15. Cosmic 21 cm delensing of microwave background polarization and the minimum detectable energy scale of inflation.

    PubMed

    Sigurdson, Kris; Cooray, Asantha

    2005-11-18

    We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes.

  16. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

    A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of diseconomy of scale.
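
    The flavor of the trade-off can be sketched for a single pipe run: capital cost rises with diameter and insulation thickness, pumping cost falls steeply with diameter (Darcy-Weisbach pressure drop scales roughly as d^-5 at fixed mass flow), and heat leak falls with insulation thickness. All cost coefficients below are hypothetical; this is a toy model, not the paper's network-wide method.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical annualized cost coefficients for a single pipe run
        # (arbitrary units): pipe material, insulation material, pumping, heat leak.
        C_PIPE, C_INS, C_PUMP, C_HEAT = 50.0, 20.0, 4.0, 9.0

        def annual_cost(x):
            d, t = x                                   # diameter, insulation (m)
            capital = C_PIPE * d + C_INS * ((d + 2*t)**2 - d**2)
            pumping = C_PUMP / d**5                    # Darcy-Weisbach: dP ~ d^-5
            heat_leak = C_HEAT / np.log((d/2 + t) / (d/2))  # radial conduction
            return capital + pumping + heat_leak

        res = minimize(annual_cost, x0=[0.3, 0.1],
                       bounds=[(0.05, 1.0), (0.01, 0.5)])
        print(res.x, res.fun)                          # cost-minimizing (d, t)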

  17. Global solar wind variations over the last four centuries.

    PubMed

    Owens, M J; Lockwood, M; Riley, P

    2017-01-31

    The most recent "grand minimum" of solar activity, the Maunder minimum (MM, 1650-1710), is of great interest both for understanding the solar dynamo and providing insight into possible future heliospheric conditions. Here, we use nearly 30 years of output from a data-constrained magnetohydrodynamic model of the solar corona to calibrate heliospheric reconstructions based solely on sunspot observations. Using these empirical relations, we produce the first quantitative estimate of global solar wind variations over the last 400 years. Relative to the modern era, the MM shows a factor 2 reduction in near-Earth heliospheric magnetic field strength and solar wind speed, and up to a factor 4 increase in solar wind Mach number. Thus solar wind energy input into the Earth's magnetosphere was reduced, resulting in a more Jupiter-like system, in agreement with the dearth of auroral reports from the time. The global heliosphere was both smaller and more symmetric under MM conditions, which has implications for the interpretation of cosmogenic radionuclide data and resulting total solar irradiance estimates during grand minima.

  18. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
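
    A minimal sketch of the core idea (not the CCOMP code itself): scan a grid over the prescribed domain for points where the minimum-modulus eigenvalue of the system matrix is small, then refine each candidate with a bound-constrained local minimization. The 2 x 2 matrix A(z) is a hypothetical stand-in with known roots.

        import numpy as np
        from scipy.optimize import minimize

        def A(z):
            # Hypothetical 2 x 2 determinantal matrix: det A(z) = 0 at z = 1, +/-2j.
            return np.array([[z**2 + 4.0, 0.0],
                             [0.0, z - 1.0]])

        def f(xy):
            # Squared minimum-modulus eigenvalue of A(z); zero exactly at the roots.
            x, y = xy
            return np.abs(np.linalg.eigvals(A(x + 1j * y))).min() ** 2

        # 1) Coarse scan of the prescribed domain [-3,3] x [-3,3] for candidates.
        xs = ys = np.linspace(-3.0, 3.0, 61)
        candidates = [(x, y) for x in xs for y in ys if f((x, y)) < 0.25]

        # 2) Refine each candidate by bound-constrained local minimization.
        roots = set()
        for c in candidates:
            r = minimize(f, c, method="L-BFGS-B", bounds=[(-3, 3), (-3, 3)])
            if r.fun < 1e-8:
                roots.add((round(r.x[0], 3), round(r.x[1], 3)))
        print(sorted(roots))   # expect approximately (0, -2), (0, 2), (1, 0)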

  19. A closed-form trim solution yielding minimum trim drag for airplanes with multiple longitudinal-control effectors

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.

    1989-01-01

    Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and a two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
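
    The closed-form character of such a solution can be illustrated with a quadratic trim-drag model: minimizing L^T Q L subject to linear lift and pitching-moment constraints A L = b reduces, via the Lagrange conditions, to a single linear solve of the KKT system. The drag matrix and moment arms below are hypothetical, and this is a sketch of the general approach rather than the LOTS formulation itself.

        import numpy as np

        # Quadratic trim-drag model D(L) = L^T Q L for lifts on three surfaces
        # (e.g., canard, wing, tail); Q is a hypothetical induced-drag matrix.
        Q = np.diag([6.0, 1.0, 4.0])

        # Linear trim constraints A L = b: total lift equals weight (first row)
        # and zero pitching moment (second row, hypothetical moment arms in m).
        A = np.array([[1.0, 1.0, 1.0],
                      [6.0, 0.5, -7.0]])
        b = np.array([100000.0, 0.0])

        # Lagrange conditions: 2 Q L + A^T lam = 0 and A L = b -> one linear solve.
        n, m = Q.shape[0], A.shape[0]
        K = np.block([[2.0 * Q, A.T],
                      [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
        print("optimal surface lifts:", sol[:n])
        print("trim residual:", A @ sol[:n] - b)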

  20. Multidimensionally constrained relativistic mean-field study of triple-humped barriers in actinides

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Lu, Bing-Nan; Vretenar, Dario; Zhao, En-Guang; Zhou, Shan-Gui

    2015-01-01

    Background: Potential energy surfaces (PES's) of actinide nuclei are characterized by a two-humped barrier structure. At large deformations beyond the second barrier, the occurrence of a third barrier was predicted by macroscopic-microscopic model calculations in the 1970s, but contradictory results were later reported by a number of studies that used different methods. Purpose: Triple-humped barriers in actinide nuclei are investigated in the framework of covariant density functional theory (CDFT). Methods: Calculations are performed using the multidimensionally constrained relativistic mean field (MDC-RMF) model, with the nonlinear point-coupling functional PC-PK1 and the density-dependent meson exchange functional DD-ME2 in the particle-hole channel. Pairing correlations are treated in the BCS approximation with a separable pairing force of finite range. Results: Two-dimensional PES's of 226,228,230,232Th and 232,235,236,238U are mapped and the third minima on these surfaces are located. Then one-dimensional potential energy curves along the fission path are analyzed in detail and the energies of the second barrier, the third minimum, and the third barrier are determined. The functional DD-ME2 predicts the occurrence of a third barrier in all Th nuclei and 238U. The third minima in 230,232Th are very shallow, whereas those in 226,228Th and 238U are quite prominent. With the functional PC-PK1 a third barrier is found only in 226,228,230Th. Single-nucleon levels around the Fermi surface are analyzed in 226Th, and it is found that the formation of the third minimum is mainly due to the Z = 90 proton energy gap at β20 ≈ 1.5 and β30 ≈ 0.7. Conclusions: The possible occurrence of a third barrier on the PES's of actinide nuclei depends on the effective interaction used in multidimensional CDFT calculations. More pronounced minima are predicted by the DD-ME2 functional, as compared to the functional PC-PK1. The depth of the third well in Th isotopes decreases with increasing neutron number. The origin of the third minimum is due to the proton Z = 90 shell gap at relevant deformations.

  1. Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.

    PubMed

    Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews

    2015-03-01

    This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.

  2. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to thermal hyperspectral data and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to thermal data. The image is segmented into meaningful objects based on properties such as geometry and length: pixels are grouped into objects using a watershed algorithm, and a supervised classification algorithm, the support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
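
    The best-performing pixel-based classifier here, SAM, is simple to state: each pixel is assigned to the reference class whose spectrum subtends the smallest angle with the pixel spectrum. A minimal sketch with hypothetical four-band spectra:

        import numpy as np

        def spectral_angle_mapper(pixels, refs):
            """pixels: (N, B) spectra; refs: (C, B) class reference spectra.
            Returns, for every pixel, the index of the reference with the
            smallest spectral angle theta = arccos(<p, r> / (|p| |r|))."""
            p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
            r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
            angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))  # (N, C)
            return angles.argmin(axis=1)

        # Toy 4-band example with two hypothetical class signatures
        refs = np.array([[0.8, 0.6, 0.3, 0.1],
                         [0.1, 0.3, 0.7, 0.9]])
        pixels = np.array([[0.7, 0.5, 0.2, 0.1],
                           [0.2, 0.4, 0.6, 0.8]])
        print(spectral_angle_mapper(pixels, refs))  # -> [0 1]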

  3. Deriving amplification factors from simple site parameters using generalized regression neural networks: implications for relevant site proxies

    NASA Astrophysics Data System (ADS)

    Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène

    2017-07-01

    Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (in Earthq Spectra 10:617-653, 1994). In this context, the linear, viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or as multiple combinations) is analyzed using the generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As the usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the "best performing" site parameters. The best one is the overall velocity contrast between the underlying bedrock and the minimum velocity in the soil column. Because these are the most difficult and expensive parameters to measure, especially for thick deposits, other more convenient parameters are preferred, especially the pair (Vs30, f0), which leads to a variance reduction of at least 60%. From a code perspective, equations and plots are provided describing the dependence of the short- and mid-period amplification factors Fa and Fv on these two parameters. The robustness of the results is analyzed by performing a similar analysis for two alternative sets of velocity profiles, for which the bedrock velocity is constrained to have the same value for all velocity profiles, which is not the case in the original set.
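
    A GRNN is, in essence, Nadaraya-Watson kernel regression: the prediction is a Gaussian-kernel-weighted average of the training targets. A minimal sketch predicting an amplification factor from two standardized site proxies; all numbers are hypothetical, not from the 858-profile data set:

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.5):
            """Generalized regression neural network (Nadaraya-Watson form):
            y(x) = sum_i w_i y_i / sum_i w_i, w_i = exp(-|x - x_i|^2 / (2 sigma^2))."""
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * sigma**2))
            return (w @ y_train) / w.sum(axis=1)

        # Hypothetical standardized site proxies (Vs30, f0) and amplification factors
        X = np.array([[-1.0, -0.8], [0.0, 0.1], [1.2, 0.9], [0.5, 1.5]])
        y = np.array([2.8, 1.9, 1.2, 1.4])
        print(grnn_predict(X, y, np.array([[0.2, 0.3]])))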

  4. Optical Design and Sensitivity of the Probe of Inflation and Cosmic Origins

    NASA Astrophysics Data System (ADS)

    Young, Karl S.; Hanany, Shaul; Wen, Qi

    2018-01-01

    The Probe of Inflation and Cosmic Origins (PICO) is a NASA probe-class mission concept being studied in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. PICO will detect, or place new limits on, the energy scale of inflation and the physics of quantum gravity, determine the effective number of neutrino species and constrain the sum of neutrino masses, measure the optical depth to reionization to the cosmic variance limit, and shed new light on the role of magnetic fields in galactic evolution and star formation by making polarimetric maps of the full mm-wave sky with sensitivity 70 times higher than the Planck space mission. The maps made by PICO will provide a catalog of thousands of new protoclusters and infrared galaxies, as well as tens of thousands of galaxy clusters, which will further constrain cosmological parameters. PICO will have a 1.4 meter aperture telescope with 21 bands from 20 to 800 GHz. We show the current PICO optics and discuss trade-offs between types of optical systems, limits imposed by scan strategies, and maximizing the number of detectors on sky. We present the instrument’s focal plane and the expected mission sensitivity.

  5. Waveform inversion of mantle Love waves: The born seismogram approach

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1983-01-01

    Normal mode theory, extended to the slightly laterally heterogeneous Earth by the first-order Born approximation, is applied to the waveform inversion of mantle Love waves (200-500 sec) for the Earth's lateral heterogeneity at l=2 and a spherically symmetric anelasticity (Q_mu) structure. The data are from the Global Digital Seismograph Network (GDSN). The l=2 pattern is very similar to the results of other studies that used either different methods, such as phase velocity measurements and multiplet location measurements, or a different data set, such as mantle Rayleigh waves from different instruments. The results are carefully analyzed for variance reduction and are most naturally explained by heterogeneity in the upper 420 km. Because of the poor resolution of the data set for the deep interior, however, a fairly large heterogeneity in the transition zones, of the order of up to 3.5% in shear wave velocity, is allowed. It is noteworthy that Love waves of this period range cannot constrain the structure below 420 km, and thus any model presented by similar studies below this depth is likely to be constrained by Rayleigh waves (spheroidal modes) only.

  6. Waveform inversion of mantle Love waves - The Born seismogram approach

    NASA Technical Reports Server (NTRS)

    Tanimoto, T.

    1984-01-01

    Normal mode theory, extended to the slightly laterally heterogeneous earth by the first-order Born approximation, is applied to the waveform inversion of mantle Love waves (200-500 sec) for the earth's lateral heterogeneity at l = 2 and a spherically symmetric anelasticity (Q_mu) structure. The data are from the Global Digital Seismograph Network (GDSN). The l = 2 pattern is very similar to the results of other studies that used either different methods, such as phase velocity measurements and multiplet location measurements, or a different data set, such as mantle Rayleigh waves from different instruments. The results are carefully analyzed for variance reduction and are most naturally explained by heterogeneity in the upper 420 km. Because of the poor resolution of the data set for the deep interior, however, a fairly large heterogeneity in the transition zones, of the order of up to 3.5 percent in shear wave velocity, is allowed. It is noteworthy that Love waves of this period range cannot constrain the structure below 420 km, and thus any model presented by similar studies below this depth is likely to be constrained by Rayleigh waves (spheroidal modes) only.

  7. Experimental Evaluation of the High-Speed Motion Vector Measurement by Combining Synthetic Aperture Array Processing with Constrained Least Square Method

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryouta; Yagi, Shin-ichi; Tamura, Kiyoshi; Sato, Masakazu

    2009-07-01

    Ultrahigh speed dynamic elastography has promising capabilities for the clinical diagnosis and therapy of living soft tissues. In order to realize ultrahigh speed motion tracking at speeds of over a thousand frames per second, synthetic aperture (SA) array signal processing technology must be introduced. Furthermore, the overall system performance should withstand fine quantitative evaluation of the accuracy and variance of echo phase changes distributed across a tissue medium. For the spatial evaluation of local phase changes caused by pulsed excitation of a tissue phantom, the proposed SA signal system was investigated utilizing different virtual point sources, generated by an array transducer, to probe each component of the local tissue displacement vectors. The final results derived from the cross-correlation method (CCM) showed almost the same performance as the constrained least square method (LSM) extended to successive echo frames. These frames were reconstructed by SA processing after real-time acquisition triggered by the pulsed irradiation from a point source. The continuous behavior of the spatial motion vectors demonstrated the dynamic generation and traveling of the pulsed shear wave, imaged at one thousand frames per second.
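
    The cross-correlation method mentioned above estimates the local displacement between successive echo frames as the lag that maximizes their cross-correlation. A 1-D toy sketch on synthetic RF data (the real processing operates on beamformed 2-D frames):

        import numpy as np

        def ccm_shift(frame0, frame1, max_lag=20):
            """Estimate the integer sample shift between two echo signals as
            the lag maximizing their cross-correlation (1-D toy version of CCM)."""
            lags = np.arange(-max_lag, max_lag + 1)
            corr = [np.dot(frame0[max_lag:-max_lag],
                           np.roll(frame1, -k)[max_lag:-max_lag]) for k in lags]
            return lags[int(np.argmax(corr))]

        rng = np.random.default_rng(0)
        rf = rng.standard_normal(500)     # synthetic RF echo line
        shifted = np.roll(rf, 7)          # simulate tissue displacement
        print(ccm_shift(rf, shifted))     # -> 7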

  8. Cosmology with CLASS

    NASA Astrophysics Data System (ADS)

    Watts, Duncan; CLASS Collaboration

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.

  9. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
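
    The minimum norm approach picks, among all source distributions consistent with the sensor data, the one with the smallest L2 norm; in Tikhonov-regularized form this has the closed-form solution x = L^T (L L^T + λI)^(-1) y. A minimal sketch with a random stand-in leadfield (a real MEG leadfield would come from a forward model):

        import numpy as np

        def minimum_norm_estimate(L, y, lam=1e-2):
            """Regularized minimum-norm inverse: among source vectors consistent
            with sensor data y = L x + noise, favor the smallest-norm solution:
            x_hat = L^T (L L^T + lam * I)^(-1) y."""
            G = L @ L.T + lam * np.eye(L.shape[0])
            return L.T @ np.linalg.solve(G, y)

        rng = np.random.default_rng(1)
        L = rng.standard_normal((64, 500))  # hypothetical leadfield: 64 sensors, 500 sources
        x_true = np.zeros(500)
        x_true[42] = 1.0                    # single active source
        y = L @ x_true + 0.01 * rng.standard_normal(64)
        x_hat = minimum_norm_estimate(L, y)
        print(int(np.abs(x_hat).argmax())) # peak should land at/near source 42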

  10. An Observationally Constrained Evaluation of the Oxidative Capacity in the Tropical Western Pacific Troposphere

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.

    2016-01-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL approximately 30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  11. Constraining the local variance of H0 from directional analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bengaly, C.A.P. Jr., E-mail: carlosap@on.br

    We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8 ± 2.4 km s^-1 Mpc^-1) with that of the Planck's Cosmic Microwave Background data (H0 = 67.8 ± 0.9 km s^-1 Mpc^-1). We obtain that H0 ranges from 68.9 ± 0.5 km s^-1 Mpc^-1 to 71.2 ± 0.7 km s^-1 Mpc^-1 across the celestial sphere (1σ uncertainty), implying a Hubble Constant maximal variance of δH0 = (2.30 ± 0.86) km s^-1 Mpc^-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.

  12. Galaxy And Mass Assembly (GAMA): the 0.013 < z < 0.1 cosmic spectral energy distribution from 0.1 μm to 1 mm

    NASA Astrophysics Data System (ADS)

    Driver, S. P.; Robotham, A. S. G.; Kelvin, L.; Alpaslan, M.; Baldry, I. K.; Bamford, S. P.; Brough, S.; Brown, M.; Hopkins, A. M.; Liske, J.; Loveday, J.; Norberg, P.; Peacock, J. A.; Andrae, E.; Bland-Hawthorn, J.; Bourne, N.; Cameron, E.; Colless, M.; Conselice, C. J.; Croom, S. M.; Dunne, L.; Frenk, C. S.; Graham, Alister W.; Gunawardhana, M.; Hill, D. T.; Jones, D. H.; Kuijken, K.; Madore, B.; Nichol, R. C.; Parkinson, H. R.; Pimbblet, K. A.; Phillipps, S.; Popescu, C. C.; Prescott, M.; Seibert, M.; Sharp, R. G.; Sutherland, W. J.; Taylor, E. N.; Thomas, D.; Tuffs, R. J.; van Kampen, E.; Wijesinghe, D.; Wilkins, S.

    2012-12-01

    We use the Galaxy And Mass Assembly survey (GAMA) I data set combined with GALEX, Sloan Digital Sky Survey (SDSS) and UKIRT Infrared Deep Sky Survey (UKIDSS) imaging to construct the low-redshift (z < 0.1) galaxy luminosity functions in FUV, NUV, ugriz and YJHK bands from within a single well-constrained volume of 3.4 × 10^5 (Mpc h^-1)^3. The derived luminosity distributions are normalized to the SDSS data release 7 (DR7) main survey to reduce the estimated cosmic variance to the 5 per cent level. The data are used to construct the cosmic spectral energy distribution (CSED) from 0.1 to 2.1 μm free from any wavelength-dependent cosmic variance for both the elliptical and non-elliptical populations. The two populations exhibit dramatically different CSEDs as expected for a predominantly old and young population, respectively. Using the Driver et al. prescription for the azimuthally averaged photon escape fraction, the non-ellipticals are corrected for the impact of dust attenuation and the combined CSED constructed. The final results show that the Universe is currently generating (1.8 ± 0.3) × 10^35 h W Mpc^-3 of which (1.2 ± 0.1) × 10^35 h W Mpc^-3 is directly released into the inter-galactic medium and (0.6 ± 0.1) × 10^35 h W Mpc^-3 is reprocessed and reradiated by dust in the far-IR. Using the GAMA data and our dust model we predict the mid- and far-IR emission which agrees remarkably well with available data. We therefore provide a robust description of the pre- and post-dust attenuated energy output of the nearby Universe from 0.1 μm to 0.6 mm. The largest uncertainty in this measurement lies in the mid- and far-IR bands stemming from the dust attenuation correction and its currently poorly constrained dependence on environment, stellar mass and morphology.

  13. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China.

    PubMed

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-09-19

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, and then infers the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.

  14. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on power consumption is quantified using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
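
    The general approach can be sketched as follows: fit a second-order response surface to measured power as a function of the machining parameters, then minimize the fitted surface over the admissible parameter box. The data below are synthetic, and the desirability step is reduced to a plain minimization:

        import numpy as np
        from itertools import combinations_with_replacement
        from scipy.optimize import minimize

        def quad_features(X):
            # Second-order response-surface design matrix: 1, x_i, and x_i * x_j.
            cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j]
                     for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(2)
        X = rng.uniform(0.0, 1.0, (30, 3))   # scaled cutting speed, feed, depth of cut
        power = (2.0 + X @ np.array([1.5, 2.0, 1.0])
                 + 3.0 * (X[:, 0] - 0.4) ** 2 + 0.05 * rng.standard_normal(30))

        beta, *_ = np.linalg.lstsq(quad_features(X), power, rcond=None)
        fitted = lambda x: (quad_features(np.atleast_2d(x)) @ beta).item()

        res = minimize(fitted, x0=[0.5, 0.5, 0.5], bounds=[(0.0, 1.0)] * 3)
        print(res.x, res.fun)                # minimum-power parameter settings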

  15. Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.

    PubMed

    Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V

    2014-12-01

    Groundnut during storage is attacked by a number of stored-grain pests, and the management of these insect pests, particularly the bruchid beetle, Caryedon serratus (Olivier), is of prime importance, as they directly damage the pods and kernels. In this regard, the different storage bags that could be used and the duration for which groundnut can be stored have been studied. The super grain bag recorded the minimum number of eggs laid, the least damage, and the minimum weight loss in pods and kernels in comparison to the other storage bags. Analysis of variance for multiple regression models was found to be significant in all bags for the variables, viz., number of eggs laid, damage in pods and kernels, and weight loss in pods and kernels, throughout the season. Multiple comparison results showed that there was a high probability of eggs laid and pod damage in the lino bag, fertilizer bag, and gunny bag, whereas the super grain bag was found to be more effective in managing C. serratus owing to very low air circulation.

  16. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
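
    As a baseline for the paper's hybrid estimator, the global minimum variance weights have the closed form w = S^-1 1 / (1^T S^-1 1) for a covariance estimate S. The sketch below uses scikit-learn's Ledoit-Wolf shrinkage estimator as a simple stand-in for the hybrid Tyler/Ledoit-Wolf estimator studied in the paper; the return data are synthetic:

        import numpy as np
        from sklearn.covariance import LedoitWolf

        def gmv_weights(returns):
            # Global minimum variance weights: w = S^-1 1 / (1^T S^-1 1), with S
            # from Ledoit-Wolf shrinkage (a stand-in for the paper's hybrid
            # Tyler + Ledoit-Wolf estimator).
            S = LedoitWolf().fit(returns).covariance_
            w = np.linalg.solve(S, np.ones(S.shape[0]))
            return w / w.sum()

        rng = np.random.default_rng(3)
        R = 0.01 * rng.standard_normal((250, 50))   # 250 days x 50 synthetic assets
        w = gmv_weights(R)
        print(w.sum(), w @ np.cov(R, rowvar=False) @ w)  # weights sum to 1; realized variance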

  17. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory based filtering). As empirical support for the simulation results, the analysis is also carried out for a time series of S&P 500 stock prices.

  18. A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.

    2012-01-01

    The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems, namely, nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of several. Saccade information is discarded rather than analysed, due to insufficient data length to provide consistent and minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.

  19. Statistical indicators of collective behavior and functional clusters in gene networks of yeast

    NASA Astrophysics Data System (ADS)

    Živković, J.; Tadić, B.; Wick, N.; Thurner, S.

    2006-03-01

    We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
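
    The network construction described here follows a standard pipeline: compute the gene-gene correlation matrix, map correlations to distances (d = sqrt(2(1 - ρ)) is a common choice), and extract the minimum spanning tree. A minimal sketch on synthetic expression data:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(4)
        expr = rng.standard_normal((20, 30))   # 20 genes x 30 time points (synthetic)

        rho = np.corrcoef(expr)                # gene-gene correlation matrix
        dist = np.sqrt(2.0 * (1.0 - rho))      # standard correlation -> distance map
        np.fill_diagonal(dist, 0.0)

        mst = minimum_spanning_tree(dist)      # sparse (20 x 20) tree with 19 edges
        edges = np.transpose(mst.nonzero())
        print(len(edges), "edges in the minimum spanning tree")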

  20. Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Phillips, Roger J.

    1991-01-01

    Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z_0. The apparent depth of isostatic compensation at z_0 = 1400 km is 180 ± 20 km based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.

  1. Minimal Model of Prey Localization through the Lateral-Line System

    NASA Astrophysics Data System (ADS)

    Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo

    2003-10-01

    The clawed frog Xenopus is an aquatic predator that catches prey at night by detecting the water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey, and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
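
    The statistical heart of such a minimal model is inverse-variance weighting: the linear unbiased combination of noisy readings with the smallest variance. Defunct organs are handled naturally by assigning them infinite variance (zero weight). A sketch with hypothetical direction readings:

        import numpy as np

        def minimum_variance_estimate(readings, variances):
            """Fuse noisy estimates by inverse-variance weighting -- the linear
            unbiased combination with the smallest variance.  Defunct organs
            get infinite variance and hence zero weight."""
            w = 1.0 / np.asarray(variances)
            est = np.sum(w * readings) / np.sum(w)
            return est, 1.0 / np.sum(w)    # fused estimate and its variance

        # Hypothetical wave-direction readings (degrees) from four lateral-line organs
        readings = np.array([42.0, 38.5, 45.0, 40.0])
        variances = np.array([4.0, 2.0, np.inf, 1.0])  # third organ is defunct
        print(minimum_variance_estimate(readings, variances))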

  2. Analysis of Doppler Lidar Data Acquired During the Pentagon Shield Field Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsom, Rob K.

    2011-04-14

    Observations from two coherent Doppler lidars deployed during the Pentagon Shield field campaign are analyzed in conjunction with other sensors to characterize the overall boundary-layer structure and identify the dominant flow characteristics during the entire two-week field campaign. Convective boundary layer (CBL) heights and cloud base heights (CBH) are estimated from an analysis of the lidar signal-to-noise ratio (SNR), and mean wind profiles are computed using a modified velocity-azimuth-display (VAD) algorithm. Three-dimensional wind field retrievals are computed from coordinated overlapping volume scans, and the results are analyzed by visualizing the flow in horizontal and vertical cross sections. The VAD winds show that southerly flows dominate during the two-week field campaign. Low-level jets (LLJ) were evident on all but two of the nights during the field campaign. The LLJs tended to form a couple of hours after sunset and reach maximum strength between 03 and 07 UTC. The surface friction velocities show distinct local maxima during four nights when strong LLJs formed. Estimates of the convective boundary layer height and residual layer height are obtained through an analysis of the vertical gradient of the lidar SNR. A strong minimum in the SNR gradient often develops just above the surface after sunrise. This minimum is associated with the developing CBL and increases rapidly during the early portion of the daytime period. On several days, this minimum continues to increase until about sunset. Secondary minima in the SNR gradient were also observed at higher altitudes, and are believed to be remnants of the CBL height from previous days, i.e., the residual layer height. The dual-Doppler analysis technique used in this study makes use of hourly averaged radial velocity data to produce three-dimensional grids of the horizontal velocity components and the horizontal velocity variance. Visualization of horizontal and vertical cross sections of the dual-Doppler wind retrievals often indicated a jet-like flow feature over the Potomac River under southerly flow conditions. This linear flow feature is roughly aligned with the Potomac River corridor to the south of the confluence with the Anacostia River, and is most apparent at low levels (i.e., below ~150 m MSL). It is believed that this flow arises due to reduced drag over the water surface when the large-scale flow aligns with the Potomac River corridor. A so-called area-constrained VAD analysis generally confirmed the observations from the dual-Doppler analysis. When the large-scale flow is southerly, wind speeds over the Potomac River are consistently larger than at a site just to the west of the river for altitudes less than 100 m MSL. Above this level, the trend is somewhat less obvious.

  3. Statistical evaluation of metal fill widths for emulated metal fill in parasitic extraction methodology

    NASA Astrophysics Data System (ADS)

    J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul

    2015-05-01

    In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold in market share. This paper proposes a design flow around the area of parasitic extraction to improve the design cycle time. The proposed design flow utilizes metal fill emulation, as opposed to the current flow, which performs metal fill insertion directly. Replacing metal fill structures with an emulation methodology in earlier iterations of the design flow is targeted to help reduce the runtime of the fill insertion stage. Statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x minimum metal width to 6 x minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
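
    The randomized complete block analysis amounts to a two-way ANOVA with fill width as the treatment and test case as the block. A minimal sketch with synthetic capacitance data (the statsmodels formula interface is one convenient way to fit it):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        rng = np.random.default_rng(5)
        widths = [1, 2, 3, 4, 5, 6]                       # multiples of minimum metal width
        blocks = {"tc1k": 0.0, "tc5k": 0.2, "tc21k": 0.5}  # hypothetical test-case offsets

        rows = [{"width": w, "block": b,
                 "cap": 1.0 + 0.05 * w + off + 0.02 * rng.standard_normal()}
                for w in widths for b, off in blocks.items()]
        df = pd.DataFrame(rows)

        # Randomized complete block design: treatment = fill width, block = test case.
        model = smf.ols("cap ~ C(width) + C(block)", data=df).fit()
        print(anova_lm(model))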

  4. Claw length recommendations for dairy cow foot trimming

    PubMed Central

    Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.

    2015-01-01

    The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm. PMID:26220848

  5. An objective method to determine the probability distribution of the minimum apparent age of a sample of radio-isotopic dates

    NASA Astrophysics Data System (ADS)

    Ickert, R. B.; Mundil, R.

    2012-12-01

    Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (the time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (the time between crystallization and eruption) can be high and is variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of the deposition of the host rock. However, both the extraction of that age and the assignment of a meaningful uncertainty are not straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect - if any - of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show its advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age - as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75: A Geochronological Toolkit for Microsoft Excel. Berkeley Geochronology Center Special Publication No. 5.
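
    The described bootstrap is straightforward to implement: resample the dates with replacement, draw each resampled date from a Gaussian defined by its mean and 1-sigma uncertainty, record the minimum, and repeat. A sketch on synthetic dates (not data from the study):

        import numpy as np

        def bootstrap_minimum_age(ages, sigmas, n_boot=10000, seed=0):
            """Distribution of the minimum apparent age: resample the dates with
            replacement (bootstrap), draw each resampled date from a Gaussian
            defined by its mean and 1-sigma uncertainty, and take the minimum."""
            rng = np.random.default_rng(seed)
            n = len(ages)
            idx = rng.integers(0, n, size=(n_boot, n))   # bootstrap resamples
            draws = rng.normal(np.asarray(ages)[idx], np.asarray(sigmas)[idx])
            return draws.min(axis=1)

        # Synthetic zircon U-Pb dates (Ma) with 1-sigma analytical uncertainties
        ages = [252.3, 252.6, 252.8, 253.4, 254.1, 256.0]
        sig = [0.3, 0.2, 0.4, 0.3, 0.5, 0.4]
        mins = bootstrap_minimum_age(ages, sig)
        print(np.percentile(mins, [2.5, 50, 97.5]))  # 95% CI on the minimum age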

  6. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation, and is thus uniquely suitable for real-time feedback guidance implementations.

  7. Pleistocene Thermocline Reconstruction and Oxygen Minimum Zone Evolution in the Maldives

    NASA Astrophysics Data System (ADS)

    Yu, S. M.; Wright, J.

    2017-12-01

    Drift deposits on the southern flank of the Kardiva Channel in the eastern Inner Sea of the Maldives provide a complete record of Pleistocene water column changes in conjunction with monsoon cyclicity and fluctuations in the current system. We sampled IODP Site 359-U1467 to reconstruct the water column structure using foraminiferal stable isotope records. This unlithified lithostratigraphic unit is rich in well-preserved microfossils and has an average sedimentation rate of 3.4 cm/kyr. Marine Isotope Stages 1-6 were identified and show higher sedimentation rates, approaching 6 cm/kyr, during the interglacial sections. We present the δ13C and δ18O records of planktonic and benthic foraminiferal species taken at intervals of 3 cm. Globigerinoides ruber was used to constrain surface conditions. The thermocline-dwelling species Globorotalia menardii was chosen to monitor fluctuations in the thermocline compared to the mixed layer. Lastly, the δ13C of the benthic species Cibicidoides subhaidingerii and Planulina renzi reveal changes in bottom water ventilation and the expansion of oxygen minimum zones over time. All three taxa recorded similar changes in δ18O over the glacial/interglacial cycles, which is remarkable given the large sea level change (~120 m) and the relatively shallow water depth (~450 m). There is a small increase in the δ13C gradient during the glacial intervals, which might reflect less ventilated bottom waters in the Inner Sea. This multispecies approach allows us to better constrain the thermocline hydrography and suggests that changes in the OMZ thickness are driven by the intensification of the monsoon cycles, while painting a more cohesive picture of the changes in the water column structure.

  8. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  9. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to have a bias toward a solution with minimum (L2)^2 norm without any physical justification. When it is a priori known that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum (L2)^2 norm yields an image that is equally in agreement with the available data, while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.)
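
    One common way to write a minimum support stabilizer (not necessarily the author's exact formulation) is Σ m_i^2 / (m_i^2 + β^2), which approximately counts the nonzero model cells. A toy linearized inversion sketch; the forward operator, noise level, and tuning constants are all hypothetical, and the objective is nonconvex, so the result depends on the starting model:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)
        n_data, n_model = 40, 100
        G = rng.standard_normal((n_data, n_model))   # toy linear forward operator
        m_true = np.zeros(n_model)
        m_true[45:50] = 1.0                          # compact scatterer
        d = G @ m_true + 0.01 * rng.standard_normal(n_data)

        alpha, beta2 = 0.5, 1e-2                     # hypothetical tuning constants

        def objective(m):
            misfit = G @ m - d
            support = np.sum(m**2 / (m**2 + beta2))  # ~ number of nonzero cells
            return misfit @ misfit + alpha * support

        m0, *_ = np.linalg.lstsq(G, d, rcond=None)   # start from a smooth LS model
        res = minimize(objective, m0, method="L-BFGS-B")
        print(np.flatnonzero(np.abs(res.x) > 0.2))   # ideally near cells 45..49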

  10. Assessment of the interactions between economic growth and industrial wastewater discharges using co-integration analysis: a case study for China's Hunan Province.

    PubMed

    Xiao, Qiang; Gao, Yang; Hu, Dan; Tan, Hong; Wang, Tianxiang

    2011-07-01

    We have investigated the interactions between economic growth and industrial wastewater discharge from 1978 to 2007 in China's Hunan Province using co-integration theory and an error-correction model. Two main economic growth indicators and four representative industrial wastewater pollutants were selected to demonstrate the interaction mechanism. We found a long-term equilibrium relationship between economic growth and the discharge of industrial pollutants in wastewater between 1978 and 2007 in Hunan Province. The error-correction mechanism prevents the variables from drifting away from their long-term equilibrium relationship in quantity and scale, and the size of the error-correction parameters reflects the short-term adjustment of deviations from the long-term equilibrium. When economic growth changes within a short term, the discharge of pollutants will constrain growth, because the values of the parameters in the short-term equation are smaller than those in the long-term co-integrated regression equation. This indicates a remarkable long-term influence of economic growth on the discharge of industrial wastewater pollutants, and that increasing pollutant discharge constrained economic growth. Economic growth is the main driving factor that affects the discharge of industrial wastewater pollutants in Hunan Province. On the other hand, the discharge constrains economic growth by producing external pressure on growth, although this feedback mechanism has a lag effect. Economic growth plays an important role in explaining the predicted decomposition of the variance in the discharge of industrial wastewater pollutants, but this discharge contributes less to predictions of the variations in economic growth.

  11. Assessment of the Interactions between Economic Growth and Industrial Wastewater Discharges Using Co-integration Analysis: A Case Study for China’s Hunan Province

    PubMed Central

    Xiao, Qiang; Gao, Yang; Hu, Dan; Tan, Hong; Wang, Tianxiang

    2011-01-01

    We have investigated the interactions between economic growth and industrial wastewater discharge from 1978 to 2007 in China’s Hunan Province using co-integration theory and an error-correction model. Two main economic growth indicators and four representative industrial wastewater pollutants were selected to demonstrate the interaction mechanism. We found a long-term equilibrium relationship between economic growth and the discharge of industrial pollutants in wastewater between 1978 and 2007 in Hunan Province. The error-correction mechanism prevented the variables from expanding beyond their long-term relationship in quantity and scale, and the sizes of the error-correction parameters reflected the short-term adjustment of deviations from the long-term equilibrium. When economic growth changes in the short term, the discharge of pollutants will constrain growth, because the parameter values in the short-term equation are smaller than those in the long-term co-integrated regression equation; this indicates a remarkable long-term influence of economic growth on the discharge of industrial wastewater pollutants and shows that increasing pollutant discharge constrained economic growth. Economic growth is the main driving factor that affects the discharge of industrial wastewater pollutants in Hunan Province. On the other hand, the discharge constrains economic growth by producing external pressure on growth, although this feedback mechanism has a lag effect. Economic growth plays an important role in explaining the forecast variance decomposition of the discharge of industrial wastewater pollutants, but this discharge contributes less to predictions of the variations in economic growth. PMID:21845167

  12. Setting new constraints on the age of a crustal-scale extensional shear zone (Vivero fault): implications for the evolution of the Variscan orogeny in the Iberian massif

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Marcos, Alberto; Martínez, Francisco J.; Iriondo, Alexander; Llana-Fúnez, Sergio

    2015-06-01

    The Vivero fault is a crustal-scale extensional shear zone parallel to the Variscan orogen in the Iberian massif belt, with an associated dip-slip movement toward the hinterland. To constrain the timing of the extension accommodated by this structure, we performed zircon U-Pb LA-ICP-MS geochronology on several deformed plutons, some of which were emplaced syntectonically. The different crystallization ages obtained indicate that the fault was active at least between 303 ± 2 and 287 ± 3 Ma, implying a minimum tectonic activity of 16 ± 5 Ma along the fault. The onset of the faulting is established to have occurred later than 314 ± 2 Ma. The geochronological data confirm that the Vivero fault postdates the main Variscan deformation events in the NW of the Iberian massif and that the extension direction of the Late Carboniferous-Early Permian crustal-scale extensional shear zones along the Ibero-Armorican Arc was consistently perpendicular to the general arcuate trend of the belt in SW Europe.

  13. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
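
    The CPS restriction and the throughput objective are easy to state even though the linear-time dynamic program is not reproduced here; the toy brute-force search below (with illustrative separation times, earliest-landing windows and a shift limit of one position) simply enumerates the CPS-feasible sequences and picks the one with the smallest completion time.

        from itertools import permutations

        # Aircraft are indexed in FCFS order; times are illustrative (seconds).
        earliest = [0, 10, 70, 100, 130]                      # earliest allowed landing times
        sep = [[0, 90, 70, 70, 70],                           # sep[i][j]: minimum gap when i lands just before j
               [60, 0, 60, 60, 60],
               [60, 60, 0, 60, 60],
               [60, 60, 60, 0, 60],
               [60, 60, 60, 60, 0]]
        k = 1                                                 # maximum position shift allowed by CPS

        def completion_time(seq):
            t = earliest[seq[0]]
            for prev, cur in zip(seq, seq[1:]):
                t = max(t + sep[prev][cur], earliest[cur])    # respect separation and the time window
            return t

        best = None
        for seq in permutations(range(len(earliest))):
            # CPS feasibility: aircraft ac (FCFS position ac) may move at most k positions.
            if all(abs(pos - ac) <= k for pos, ac in enumerate(seq)):
                cand = (completion_time(seq), seq)
                best = cand if best is None or cand < best else best

        print("best CPS-feasible sequence:", best[1], " completion time:", best[0])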

  14. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations of frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency, and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever equipment the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations, or others, may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
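
    The idea behind a Rayleigh-quotient-based frequency approximation can be sketched numerically: freeze the mode shape at the current design and evaluate the quotient with the updated stiffness and mass matrices. The toy spring-mass chain and the way the design variable enters below are assumptions for illustration, not the RQA formulation of the paper.

        import numpy as np

        def chain_matrices(k):
            """Stiffness and mass matrices of a fixed-free 3-DOF spring-mass chain (unit masses)."""
            K = np.array([[k[0] + k[1], -k[1], 0.0],
                          [-k[1], k[1] + k[2], -k[2]],
                          [0.0, -k[2], k[2]]])
            return K, np.eye(3)

        def lowest_mode(K, M):
            # M is the identity here, so the generalized eigenproblem reduces to a standard one.
            w2, V = np.linalg.eigh(K)
            return w2[0], V[:, 0]

        K0, M0 = chain_matrices([1.0, 1.0, 1.0])              # baseline design
        w2_0, phi = lowest_mode(K0, M0)

        K1, M1 = chain_matrices([1.0, 1.6, 1.0])              # one stiffness design variable changed
        w2_exact, _ = lowest_mode(K1, M1)
        w2_rq = (phi @ K1 @ phi) / (phi @ M1 @ phi)           # Rayleigh quotient with the frozen mode shape

        print("exact lowest eigenvalue: %.4f   Rayleigh-quotient approximation: %.4f" % (w2_exact, w2_rq))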

  15. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.

  16. Reduced probability of ice-free summers for 1.5 °C compared to 2 °C warming

    NASA Astrophysics Data System (ADS)

    Jahn, Alexandra

    2018-05-01

    Arctic sea ice has declined rapidly with increasing global temperatures. However, it is largely unknown how Arctic summer sea-ice impacts would vary under the 1.5 °C Paris target compared to scenarios with greater warming. Using the Community Earth System Model, I show that constraining warming to 1.5 °C rather than 2.0 °C reduces the probability of any summer ice-free conditions by 2100 from 100% to 30%. It also reduces the late-century probability of an ice cover below the 2012 record minimum from 98% to 55%. For warming above 2 °C, frequent ice-free conditions can be expected, potentially for several months per year. Although sea-ice loss is generally reversible for decreasing temperatures, sea ice will only recover to current conditions if atmospheric CO2 is reduced below present-day concentrations. Due to model biases, these results provide a lower bound on summer sea-ice impacts, but clearly demonstrate the benefits of constraining warming to 1.5 °C.

  17. Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification

    NASA Astrophysics Data System (ADS)

    He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang

    2018-04-01

    Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, which are constituted mainly by conventional color features and extracted from the superpixel rather than the whole picture, and (2) because the discrimination of appearance features is limited, deep features extracted by a feature fusion network are also used. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use deep features and sparse codes transformed from appearance-based features to establish the hyperedges respectively by k-nearest neighbor, rather than simply joining different features. Finally, a final ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.

  18. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
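
    A schematic sketch of the recommended two-stage idea is given below: cluster means are used as outcomes and weighted by the inverse of an estimated variance of each cluster mean, with the between-cluster component constrained to be non-negative. The variance-component estimator and the simulated data are simple illustrative choices, not necessarily the exact estimators compared in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def simulate_arm(sizes, sigma_b=0.5, sigma_w=1.0, mean=0.0):
            """Gaussian outcomes with a random cluster effect; cluster sizes may be unequal."""
            return [mean + rng.normal(0, sigma_b) + rng.normal(0, sigma_w, m) for m in sizes]

        def two_stage_test(arm0, arm1):
            clusters = arm0 + arm1
            group = np.r_[np.zeros(len(arm0)), np.ones(len(arm1))]
            means = np.array([c.mean() for c in clusters])
            sizes = np.array([c.size for c in clusters])
            # Method-of-moments variance components; the between-cluster part is constrained to be >= 0.
            sw2 = np.mean([c.var(ddof=1) for c in clusters])
            sb2 = max(means.var(ddof=1) - sw2 * np.mean(1.0 / sizes), 0.0)
            w = 1.0 / (sb2 + sw2 / sizes)                     # inverse estimated variance of each cluster mean
            diff = np.average(means[group == 1], weights=w[group == 1]) - \
                   np.average(means[group == 0], weights=w[group == 0])
            se = np.sqrt(1.0 / w[group == 1].sum() + 1.0 / w[group == 0].sum())
            return 2 * stats.t.sf(abs(diff / se), df=len(clusters) - 2)

        sizes = [5, 8, 12, 20, 6, 15, 9, 30]                  # unequal cluster sizes, eight clusters per arm
        p = two_stage_test(simulate_arm(sizes), simulate_arm(sizes))
        print("two-stage weighted cluster-means p-value under the null: %.3f" % p)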

  19. Symbol-string sensitivity and adult performance in lexical decision.

    PubMed

    Pammer, Kristen; Lavis, Ruth; Cooper, Charity; Hansen, Peter C; Cornelissen, Piers L

    2005-09-01

    In this study of adult readers, we used a symbol-string task to assess participants' sensitivity to the position of briefly presented, non-alphabetic but letter-like symbols. We found that sensitivity in this task explained a significant proportion of sample variance in visual lexical decision. Based on a number of controls, we show that this relationship cannot be explained by other factors including: chronological age, intelligence, speed of processing and/or concentration, short term memory consolidation, or fixation stability. This approach represents a new way to elucidate how, and to what extent, individual variation in pre-orthographic visual and cognitive processes impinge on reading skills, and the results suggest that limitations set by visuo-spatial processes constrain visual word recognition.

  20. Curvature of the freeze-out line in heavy ion collisions

    DOE PAGES

    Bazavov, A.; Ding, H. -T.; Hegde, P.; ...

    2016-01-28

    Here, we calculate the mean and variance of net-baryon number and net-electric charge distributions from quantum chromodynamics (QCD) using a next-to-leading order Taylor expansion in terms of temperature and chemical potentials. Comparing these expansions with experimental data from STAR and PHENIX, we determine the freeze-out temperature in the limit of vanishing baryon chemical potential and, for the first time, constrain the curvature of the freeze-out line through a direct comparison between experimental data on net-charge fluctuations and a QCD calculation. We obtain a bound on the curvature coefficient, κ^f_2 < 0.011, that is compatible with lattice QCD results on the curvature of the QCD transition line.

  1. Nature of Fluctuations on Directional Discontinuities Inside a Solar Ejection: Wind and IMP 8 Observations

    NASA Technical Reports Server (NTRS)

    Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)

    2001-01-01

    A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
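
    Minimum variance analysis of the kind used here reduces to an eigen-decomposition of the magnetic-field covariance matrix; the sketch below runs it on synthetic field samples (the rotation and noise levels are made up), returning the minimum-variance direction that serves as the discontinuity normal estimate.

        import numpy as np

        def minimum_variance_normal(B):
            """B: (N, 3) magnetic field samples across a boundary. Returns the eigenvector of the
            field covariance matrix with the smallest eigenvalue (the minimum variance direction,
            used as the normal estimate) together with the three eigenvalues."""
            vals, vecs = np.linalg.eigh(np.cov(B, rowvar=False))   # eigenvalues in ascending order
            return vecs[:, 0], vals

        # Synthetic check: the field rotates in the x-y plane with only weak noise along z,
        # so the minimum variance direction should come out close to the z axis.
        rng = np.random.default_rng(2)
        phi = np.linspace(0.0, np.pi / 2, 200)
        B = np.c_[5.0 * np.cos(phi), 5.0 * np.sin(phi), 0.1 * rng.standard_normal(phi.size)]
        normal, lam = minimum_variance_normal(B)
        print("estimated normal:", np.round(normal, 3), " eigenvalues:", np.round(lam, 3))

    A small ratio of the intermediate to the minimum eigenvalue is the usual warning that the normal is poorly determined, which is exactly the distortion that superposed Alfvenic fluctuations can introduce.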

  2. Hybridisations of Variable Neighbourhood Search and Modified Simplex Elements to Harmony Search and Shuffled Frog Leaping Algorithms for Process Optimisations

    NASA Astrophysics Data System (ADS)

    Aungkulanon, P.; Luangpaiboon, P.

    2010-10-01

    Nowadays, engineering problem systems are large and complicated. Effective finite sequences of instructions for solving these problems can be categorised into optimisation and meta-heuristic algorithms. Although the best decision-variable levels cannot always be determined from the sets of available alternatives, meta-heuristics offer an alternative to experience-based techniques that rapidly help in problem solving, learning and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions. It has been shown that the effectiveness of all meta-heuristics depends almost exclusively on these auxiliary functions. In fact, the auxiliary procedures from one can be implemented in other meta-heuristics. The well-known meta-heuristics of harmony search (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by mimicking the improvisation process through which musicians seek a perfect state of harmony. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics; it includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with different natures of single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods during the search for a better solution. The hybridisations proceed by a descent method to a local minimum and then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM performs better in terms of the mean and variance of design points and yields.

  3. Benefits of incorporating spatial organisation of catchments for a semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Schumann, Andreas; Oppel, Henning

    2017-04-01

    To represent the hydrological behaviour of catchments, a model should reproduce the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but are often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, reducing the variance of catchment properties as well as incorporating the spatial organisation of the catchment is desirable. We have developed a method that combines the idea of the width function, used for determining the geomorphologic unit hydrograph, with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units so as to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, information on multiple catchment properties has to be embedded. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a combined, optimal spatial differentiation. Utilizing this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the analysis also improves modelling in another way: the spatial variability of catchment characteristics, which is kept to a minimum of heterogeneity within the zones, can be accounted for in a parameter-constrained calibration scheme. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and the derived parameter constraints for the parametrisation of an HBV-96 model. We use two benchmark model setups (lumped, and semi-distributed by common approaches) to address the benefits at different temporal and spatial scales. Moreover, the benefits for calibration effort, model performance in validation periods and process extrapolation are shown.

  4. Study of Venus' cloud layers by polarimetry using SPICAV/VEx

    NASA Astrophysics Data System (ADS)

    Rossi, Loïc; Marcq, Emmanuel; Montmessin, Franck; Bertaux, Jean-Loup; Korablev, Oleg; Fedorova, Anna

    2013-04-01

    The study of Venus's cloud layers is important in order to understand the structure, radiative balance and dynamics of the Venusian atmosphere. The main cloud layers between 50 and 70 km are thought to consist of ~1 μm radius droplets of a H2SO4-H2O solution. Nevertheless, the composition and the size distribution of the droplets are difficult to constrain more precisely. Polarization measurements have proven very useful in determining the constituents of the haze. In the early 1980s, Kawabata et al. (1980) used the polarization data from the OCPP instrument on the Pioneer Venus spacecraft to constrain the properties of the haze. They obtained a refractive index of 1.45 ± 0.04 at λ = 550 nm and an effective radius of 0.23 ± 0.04 μm, with a normalized size distribution variance of 0.18 ± 0.1. Our work aims to reproduce the method used by Kawabata et al. by writing a Lorentz-Mie scattering model and applying it to the so far unexploited polarization data of the SPICAV-IR instrument on board ESA's Venus Express, in order to better constrain haze and cloud particles at the top of Venus's clouds, as well as their spatial and temporal variability. We introduce here the model we developed, based on the BH-MIE scattering model. Taking into account the same size distribution of droplets as Kawabata et al., we obtained the polarization degree after a single Mie scattering by a haze at all phase angles, given the effective radius and variance of the distribution and the refractive index of the droplets. Our model appears consistent, as it reproduces the polarization degree modeled by Kawabata et al. We also present the first application of our model to the SPICAV-IR data under the single-scattering assumption. Hence we can confirm the mean constraints on the size and refractive index of the haze and cloud droplets. In the near future, we aim to extend our study of the polarization data by integrating our model into a radiative transfer model which will take multiple scattering into account. More recent observations, at wavelengths ranging from 650 to 1625 nm, will put better constraints on the properties of both cloud and haze particles, with a primary focus on the characterization of the cloud droplets. Bibliography: BOHREN, C. F. and HUFFMAN, D. R., Absorption and Scattering of Light by Small Particles, Wiley, 1983; KAWABATA, K. et al., Cloud and haze properties from Pioneer Venus polarimetry, JGR, 1980.

  5. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
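
    The robustness measure can be evaluated directly: form the return difference matrix I + K(jω)G(jω) at the plant input on a frequency grid and take the smallest singular value. The two-input/two-output state-space plant and static gain below are invented purely to show the computation.

        import numpy as np

        # Illustrative 2x2 plant x' = Ax + Bu, y = Cx with a static output-feedback gain K.
        A = np.array([[-1.0, 0.5], [0.0, -2.0]])
        B = np.eye(2)
        C = np.array([[1.0, 0.0], [0.5, 1.0]])
        K = np.array([[2.0, 0.0], [0.0, 1.5]])

        def G(w):
            """Plant frequency response G(jw) = C (jwI - A)^-1 B."""
            return C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B

        freqs = np.logspace(-2, 2, 400)
        min_sv = min(np.linalg.svd(np.eye(2) + K @ G(w), compute_uv=False)[-1] for w in freqs)
        print("minimum singular value of the return difference I + KG over frequency: %.3f" % min_sv)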

  6. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two output drone flight control system.

  7. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.

  8. Spectrum and orbit conservation as a factor in future mobile satellite system design

    NASA Technical Reports Server (NTRS)

    Bowen, Robert R.

    1990-01-01

    Access to the radio spectrum and geostationary orbit is essential to current and future mobile satellite systems. This access is difficult to obtain for current systems, and may be even more so for larger future systems. In this environment, satellite systems that minimize the amount of spectrum orbit resource required to meet a specific traffic requirement are essential. Several spectrum conservation techniques are discussed, some of which are complementary to designing the system at minimum cost. All may need to be implemented to the limits of technological feasibility if network growth is not to be constrained because of the lack of available spectrum-orbit resource.

  9. Pulsar statistics and their interpretations

    NASA Technical Reports Server (NTRS)

    Arnett, W. D.; Lerche, I.

    1981-01-01

    It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.

  10. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
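
    Because the expected mean-square error is quadratic in the kernel values, the optimal small-support kernel follows from a small linear system set up in the frequency domain. The 1-D sketch below assumes a Gaussian blur, a flat signal spectrum and white noise; these system models are placeholders, not the end-to-end imaging model of the paper.

        import numpy as np

        n, r = 256, 3                                         # grid length and kernel half-width (7 taps)
        f = np.fft.fftfreq(n)                                 # normalized spatial frequencies
        H = np.exp(-2.0 * (np.pi * 1.5 * f) ** 2)             # assumed Gaussian blur OTF (sigma ~ 1.5 samples)
        Ps = np.ones(n)                                       # assumed flat signal power spectrum
        Pn = 0.01 * np.ones(n)                                # assumed white noise power spectrum

        taps = np.arange(-r, r + 1)
        E = np.exp(-2j * np.pi * np.outer(taps, f))           # E[j, :] = frequency response of a unit tap at offset j

        # Expected MSE = sum_f |K H - 1|^2 Ps + |K|^2 Pn is quadratic in the taps, so the
        # minimizing small-support kernel solves a (2r+1) x (2r+1) linear system.
        W = Ps * np.abs(H) ** 2 + Pn
        A = np.real((E * W) @ np.conj(E).T)
        b = np.real((E * (Ps * H)).sum(axis=1))
        k = np.linalg.solve(A, b)

        Kf = k @ E                                            # frequency response of the optimized kernel
        mse_opt = np.mean(np.abs(Kf * H - 1) ** 2 * Ps + np.abs(Kf) ** 2 * Pn)
        mse_none = np.mean(np.abs(H - 1) ** 2 * Ps + Pn)      # error if the degraded image is left untouched
        print("kernel taps:", np.round(k, 3))
        print("expected MSE, no restoration: %.3f   7-tap kernel: %.3f" % (mse_none, mse_opt))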

  11. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  12. Ni-Mn-Ga shape memory nanoactuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohl, M., E-mail: manfred.kohl@kit.edu; Schmitt, M.; Krevet, B.

    2014-01-27

    To probe finite size effects in ferromagnetic shape memory nanoactuators, double-beam structures with minimum dimensions down to 100 nm are designed, fabricated, and characterized in-situ in a scanning electron microscope with respect to their coupled thermo-elastic and electro-thermal properties. Electrical resistance and mechanical beam bending tests demonstrate a reversible thermal shape memory effect down to 100 nm. Electro-thermal actuation involves large temperature gradients along the nanobeam in the order of 100 K/μm. We discuss the influence of surface and twin boundary energies and explain why free-standing nanoactuators behave differently compared to constrained geometries like films and nanocrystalline shape memory alloys.

  13. Ni-Mn-Ga shape memory nanoactuation

    NASA Astrophysics Data System (ADS)

    Kohl, M.; Schmitt, M.; Backen, A.; Schultz, L.; Krevet, B.; Fähler, S.

    2014-01-01

    To probe finite size effects in ferromagnetic shape memory nanoactuators, double-beam structures with minimum dimensions down to 100 nm are designed, fabricated, and characterized in-situ in a scanning electron microscope with respect to their coupled thermo-elastic and electro-thermal properties. Electrical resistance and mechanical beam bending tests demonstrate a reversible thermal shape memory effect down to 100 nm. Electro-thermal actuation involves large temperature gradients along the nanobeam in the order of 100 K/μm. We discuss the influence of surface and twin boundary energies and explain why free-standing nanoactuators behave differently compared to constrained geometries like films and nanocrystalline shape memory alloys.

  14. Hydrogen Burning in Low Mass Stars Constrains Scalar-Tensor Theories of Gravity.

    PubMed

    Sakstein, Jeremy

    2015-11-13

    The most general scalar-tensor theories of gravity predict a weakening of the gravitational force inside astrophysical bodies. There is a minimum mass for hydrogen burning in stars that is set by the interplay of plasma physics and the theory of gravity. We calculate this for alternative theories of gravity and find that it is always significantly larger than the general relativity prediction. The observation of several low mass red dwarf stars therefore rules out a large class of scalar-tensor gravity theories and places strong constraints on the cosmological parameters appearing in the effective field theory of dark energy.

  15. Brillouin Frequency Shift of Fiber Distributed Sensors Extracted from Noisy Signals by Quadratic Fitting.

    PubMed

    Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen

    2018-01-31

    It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
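
    The quadratic-fit estimate and the proposed iterative recentring of the fitting window can be sketched in a few lines; the Lorentzian gain spectrum, noise level and window width below are arbitrary choices rather than the experimental values of the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        f = np.arange(10.60e9, 11.00e9, 2e6)                   # scanned frequencies (Hz), 2 MHz step
        bfs_true = 10.82e9                                      # true Brillouin frequency shift
        width = 30e6                                            # Brillouin linewidth (FWHM)
        spectrum = 1.0 / (1.0 + ((f - bfs_true) / (width / 2)) ** 2)
        noisy = spectrum + 0.05 * rng.standard_normal(f.size)   # noise-limited measurement

        def quad_peak(f, y, center_idx, half_window):
            """Fit a parabola to the points around center_idx and return the vertex frequency."""
            s = slice(max(center_idx - half_window, 0), center_idx + half_window + 1)
            c2, c1, _ = np.polyfit(f[s] - f[center_idx], y[s], 2)
            return f[center_idx] - c1 / (2.0 * c2)

        # First pass: window centred on the noisy maximum (possibly off the true peak).
        idx = int(np.argmax(noisy))
        est = quad_peak(f, noisy, idx, half_window=15)
        # Iterative refit: recentre the fitting window on each new estimate, mitigating the
        # bias that an off-centre data range introduces into the fitted vertex.
        for _ in range(5):
            idx = int(np.argmin(np.abs(f - est)))
            est = quad_peak(f, noisy, idx, half_window=15)

        print("true BFS: %.1f MHz   estimate: %.1f MHz" % (bfs_true / 1e6, est / 1e6))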

  16. Constraining external reverse shock physics of gamma-ray bursts from ROTSE-III limits

    NASA Astrophysics Data System (ADS)

    Cui, Xiao-Hong; Zou, Yuan-Chuan; Wei, Jun-Jie; Zheng, Wei-Kang; Wu, Xue-Feng

    2018-02-01

    Assuming that early optical emission is dominated by the external reverse shock (RS) in the standard model of gamma-ray bursts (GRBs), we aim to constrain RS models and the initial Lorentz factor Γ0 of the outflows based on the ROTSE-III observations. We consider two cases of RS behaviour: relativistic shock and non-relativistic shock. For a homogeneous interstellar medium (ISM) and the wind circum-burst environment, constraints can be achieved from the fact that the peak flux Fν at the RS crossing time should be lower than the observed upper limit Fν, limit. We consider the different spectral regimes in which the observed optical frequency νopt may lie, which are distinguished by the ordering of the minimum synchrotron frequency νm and the cooling frequency νc. Considering the homogeneous and wind environments around GRBs, we find that the relativistic RS case can be constrained by the (upper and lower) limits of Γ0 in a large range from about hundreds to thousands for 36 GRBs reported by ROTSE-III. Constraints on the non-relativistic RS case are achieved with limits of Γ0 ranging from ∼30 to ∼350 for 26 bursts. The lower limits of Γ0 achieved for the relativistic RS model are disfavored based on the previously discovered correlation between the initial Lorentz factor Γ0 and the isotropic gamma-ray energy Eγ, iso released in the prompt phase.

  17. How CMB and large-scale structure constrain chameleon interacting dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H_0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H_0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  18. The conformational preferences of γ-lactam and its role in constraining peptide structure

    NASA Astrophysics Data System (ADS)

    Paul, P. K. C.; Burney, P. A.; Campbell, M. M.; Osguthorpe, D. J.

    1990-09-01

    The conformational constraints imposed by γ-lactams in peptides have been studied using valence force field energy calculations and flexible geometry maps. It has been found that while cyclisation restrains the Ψ of the lactam, non-bonded interactions contribute to the constraints on ϕ of the lactam. The γ-lactam also affects the (ϕ,Ψ) of the residue after it in a peptide sequence. For an l-lactam, the ring geometry restricts Ψ to about -120°, and ϕ has two minima, the lowest energy around -140° and a higher minimum (5 kcal/mol higher) at 60°, making an l-γ-lactam more favourably accommodated in a near extended conformation than in position 2 of a type II' β-turn. The energy of the ϕ ≈ +60° minimum can be lowered substantially until it is more favoured than the -140° minimum by progressive substitution of bulkier groups on the amide N of the l-γ-lactam. The (ϕ,Ψ) maps of the residue succeeding a γ-lactam show subtle differences from those of standard N-methylated residues. The dependence of the constraints on the chirality of γ-lactams and N-substituted γ-lactams, in terms of the formation of secondary structures like β-turns, is discussed and the comparison of the theoretical conformations with experimental results is highlighted.

  19. Sleep and nutritional deprivation and performance of house officers.

    PubMed

    Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R

    1985-07-01

    A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.

  20. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
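
    A toy parametric empirical Bayes construction for the Poisson intensity is sketched below: a gamma prior is fitted to the counts by moment matching and the posterior mean is compared with the maximum likelihood estimator by Monte Carlo mean-squared error. The gamma-Poisson form is a standard illustrative choice and not necessarily the smooth estimator derived in the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        def eb_gamma_poisson(x, t=1.0):
            """Moment-matched gamma prior on the intensity, then posterior means for each unit.
            x: observed counts over exposure t for many similar items on life test."""
            m = x.mean() / t
            var_lam = max(x.var(ddof=1) / t ** 2 - m / t, 1e-12)   # excess variance attributed to the prior
            beta = m / var_lam                                      # gamma rate
            alpha = m * beta                                        # gamma shape
            return (alpha + x) / (beta + t)                         # posterior mean of the intensity

        # Monte Carlo comparison of mean-squared error against the conventional MLE x / t.
        n, t, reps = 40, 1.0, 2000
        mse_eb = mse_mle = 0.0
        for _ in range(reps):
            lam = rng.gamma(shape=3.0, scale=1.0, size=n)           # heterogeneous true intensities
            x = rng.poisson(lam * t)
            mse_eb += np.mean((eb_gamma_poisson(x, t) - lam) ** 2)
            mse_mle += np.mean((x / t - lam) ** 2)
        print("Monte Carlo MSE  empirical Bayes: %.3f   MLE: %.3f" % (mse_eb / reps, mse_mle / reps))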

  1. A Multipath Mitigation Algorithm for vehicle with Smart Antenna

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang

    2018-01-01

    In this paper, an adaptive antenna-array method is used to suppress multipath interference at the GPS L1 frequency. The anti-multipath processing combines the power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm; the antenna array is simulated and verified, the program is implemented on an FPGA, and field tests are carried out on a CBD road. The theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms, together with the tests, verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. The results provide some guidance and reference for engineering practice of satellite navigation in the vehicle field.
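
    The MVDR weights referred to here have a closed form, w = R⁻¹a / (aᴴR⁻¹a), with R the array covariance matrix and a the steering vector toward the satellite. The sketch below builds both for an assumed uniform linear array with a single multipath arrival; the geometry and signal levels are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        n_elem, n_snap, d = 8, 2000, 0.5                       # elements, snapshots, spacing in wavelengths

        def steering(theta_deg):
            phase = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg)) * np.arange(n_elem)
            return np.exp(1j * phase)

        a_sat = steering(10.0)                                 # desired satellite direction
        a_mp = steering(-40.0)                                 # multipath arrival direction

        # Simulated snapshots: weak direct signal, stronger multipath, unit receiver noise.
        s = 0.1 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
        m = 0.5 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
        noise = (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2)
        X = np.outer(a_sat, s) + np.outer(a_mp, m) + noise

        R = X @ X.conj().T / n_snap                            # sample covariance matrix
        w = np.linalg.solve(R, a_sat)
        w /= a_sat.conj() @ w                                  # MVDR: unit (distortionless) gain toward the satellite

        print("array gain toward the satellite :", round(abs(w.conj() @ a_sat), 3))
        print("array gain toward the multipath :", round(abs(w.conj() @ a_mp), 3))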

  2. What determines the direction of minimum variance of the magnetic field fluctuations in the solar wind?

    NASA Technical Reports Server (NTRS)

    Grappin, R.; Velli, M.

    1995-01-01

    The solar wind is not an isotropic medium; two symmetry axes are present: first the radial direction (because the mean wind is radial) and second the spiral direction of the mean magnetic field, which depends on heliocentric distance. Observations show very different anisotropy directions, depending on the frequency waveband; while the large-scale velocity fluctuations are essentially radial, the smaller-scale magnetic field fluctuations are mostly perpendicular to the mean field direction, which is not the expected linear (WKB) result. We attempt to explain how these properties are related, with the help of numerical simulations.

  3. Transit timing variations for planets co-orbiting in the horseshoe regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vokrouhlický, David; Nesvorný, David, E-mail: vokrouhl@cesnet.cz, E-mail: davidn@boulder.swri.edu

    2014-08-10

    Although not yet detected, pairs of exoplanets in 1:1 mean motion resonance probably exist. Low eccentricity, near-planar orbits, which in the comoving frame follow horseshoe trajectories, are one of the possible stable configurations. Here we study transit timing variations (TTVs) produced by mutual gravitational interaction of planets in this orbital architecture, with the goal to develop methods that can be used to recognize this case in observational data. In particular, we use a semi-analytic model to derive parametric constraints that should facilitate data analysis. We show that characteristic traits of the TTVs can directly constrain the (1) ratio of planetary masses and (2) their total mass (divided by that of the central star) as a function of the minimum angular separation as seen from the star. In an ideal case, when transits of both planets are observed and well characterized, the minimum angular separation can also be inferred from the data. As a result, parameters derived from the observed transit timing series alone can directly provide both planetary masses scaled to the central star mass.

  4. Improved Limits on Scattering of Weakly Interacting Massive Particles from Reanalysis of 2013 LUX Data.

    PubMed

    Akerib, D S; Araújo, H M; Bai, X; Bailey, A J; Balajthy, J; Beltrame, P; Bernard, E P; Bernstein, A; Biesiadzinski, T P; Boulton, E M; Bradley, A; Bramante, R; Cahn, S B; Carmona-Benitez, M C; Chan, C; Chapman, J J; Chiller, A A; Chiller, C; Currie, A; Cutter, J E; Davison, T J R; de Viveiros, L; Dobi, A; Dobson, J E Y; Druszkiewicz, E; Edwards, B N; Faham, C H; Fiorucci, S; Gaitskell, R J; Gehman, V M; Ghag, C; Gibson, K R; Gilchriese, M G D; Hall, C R; Hanhardt, M; Haselschwardt, S J; Hertel, S A; Hogan, D P; Horn, M; Huang, D Q; Ignarra, C M; Ihm, M; Jacobsen, R G; Ji, W; Kazkaz, K; Khaitan, D; Knoche, R; Larsen, N A; Lee, C; Lenardo, B G; Lesko, K T; Lindote, A; Lopes, M I; Malling, D C; Manalaysay, A; Mannino, R L; Marzioni, M F; McKinsey, D N; Mei, D-M; Mock, J; Moongweluwan, M; Morad, J A; Murphy, A St J; Nehrkorn, C; Nelson, H N; Neves, F; O'Sullivan, K; Oliver-Mallory, K C; Ott, R A; Palladino, K J; Pangilinan, M; Pease, E K; Phelps, P; Reichhart, L; Rhyne, C; Shaw, S; Shutt, T A; Silva, C; Solovov, V N; Sorensen, P; Stephenson, S; Sumner, T J; Szydagis, M; Taylor, D J; Taylor, W; Tennyson, B P; Terman, P A; Tiedt, D R; To, W H; Tripathi, M; Tvrznikova, L; Uvarov, S; Verbus, J R; Webb, R C; White, J T; Whitis, T J; Witherell, M S; Wolfs, F L H; Yazdani, K; Young, S K; Zhang, C

    2016-04-22

    We present constraints on weakly interacting massive particles (WIMP)-nucleus scattering from the 2013 data of the Large Underground Xenon dark matter experiment, including 1.4×10^4 kg day of search exposure. This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV. Sensitivity, especially to low-mass WIMPs, is enhanced compared to our previous results which modeled the signal only above a 3 keV minimum energy. Under standard dark matter halo assumptions and in the mass range above 4 GeV c^-2, these new results give the most stringent direct limits on the spin-independent WIMP-nucleon cross section. The 90% C.L. upper limit has a minimum of 0.6 zb at 33 GeV c^-2 WIMP mass.

  5. Numerical Computation of Homogeneous Slope Stability

    PubMed Central

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization (PSO) algorithm to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS). PMID:25784927

  6. Numerical computation of homogeneous slope stability.

    PubMed

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization (PSO) algorithm to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
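
    A compact particle swarm sketch on a toy penalized problem (standing in for the factor-of-safety evaluation) is given below; the objective, penalty and PSO coefficients are placeholders rather than the slope geometry or Spencer-method FOS of the study.

        import numpy as np

        rng = np.random.default_rng(7)

        def objective(x):
            """Toy stand-in for a factor-of-safety evaluation; the constraint x0 + x1 <= 1.5
            is handled with a simple quadratic penalty."""
            f = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + 1.0
            return f + 100.0 * max(x[0] + x[1] - 1.5, 0.0) ** 2

        n_part, n_iter, dim, lo, hi = 30, 200, 2, -3.0, 3.0
        pos = rng.uniform(lo, hi, (n_part, dim))
        vel = np.zeros((n_part, dim))
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([objective(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("minimum found at", np.round(gbest, 3), " objective value %.3f" % pbest_val.min())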

  7. Improved limits on scattering of weakly interacting massive particles from reanalysis of 2013 LUX data

    DOE PAGES

    Akerib, D. S.

    2016-04-20

    Here, we present constraints on weakly interacting massive particles (WIMP)-nucleus scattering from the 2013 data of the Large Underground Xenon dark matter experiment, including 1.4 × 10^4 kg day of search exposure. This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV. Sensitivity, especially to low-mass WIMPs, is enhanced compared to our previous results which modeled the signal only above a 3 keV minimum energy. Under standard dark matter halo assumptions and in the mass range above 4 GeV c^-2, these new results give the most stringent direct limits on the spin-independent WIMP-nucleon cross section. The 90% C.L. upper limit has a minimum of 0.6 zb at 33 GeV c^-2 WIMP mass.

  8. Improved Limits on Scattering of Weakly Interacting Massive Particles from Reanalysis of 2013 LUX Data

    NASA Astrophysics Data System (ADS)

    Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bradley, A.; Bramante, R.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chapman, J. J.; Chiller, A. A.; Chiller, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; de Viveiros, L.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Malling, D. C.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Ott, R. A.; Palladino, K. J.; Pangilinan, M.; Pease, E. K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Yazdani, K.; Young, S. K.; Zhang, C.; LUX Collaboration

    2016-04-01

    We present constraints on weakly interacting massive particles (WIMP)-nucleus scattering from the 2013 data of the Large Underground Xenon dark matter experiment, including 1.4 × 10^4 kg day of search exposure. This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV. Sensitivity, especially to low-mass WIMPs, is enhanced compared to our previous results which modeled the signal only above a 3 keV minimum energy. Under standard dark matter halo assumptions and in the mass range above 4 GeV c^-2, these new results give the most stringent direct limits on the spin-independent WIMP-nucleon cross section. The 90% C.L. upper limit has a minimum of 0.6 zb at 33 GeV c^-2 WIMP mass.

  9. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.

  10. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work a spur gear reduction unit is considered. A review of the relevant literature in the area of gear design indicates that compact design of a gearbox involves a complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are of conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width by using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints drawn from the recent literature. Attention has been devoted to constraints such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs from various algorithms, such as a genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II, are compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face-width for a minimum centre distance and a maximum power transmission for any given material.

  11. Accreting Binary Populations in the Earlier Universe

    NASA Technical Reports Server (NTRS)

    Hornschemeier, Ann

    2010-01-01

    It is now understood that X-ray binaries dominate the hard X-ray emission from normal star-forming galaxies. Thanks to the deepest (2-4 Ms) Chandra surveys, such galaxies are now being studied in X-rays out to z ≈ 4. Interesting X-ray stacking results (based on 30+ galaxies per redshift bin) suggest that the mean rest-frame 2-10 keV luminosity from z=3-4 Lyman break galaxies (LBGs) is comparable to that of the most powerful starburst galaxies in the local Universe. This result possibly indicates a similar production mechanism for accreting binaries over large cosmological timescales. To understand and constrain better the production of X-ray binaries in high-redshift LBGs, we have utilized XMM-Newton observations of a small sample of z ≈ 0.1 GALEX-selected Ultraviolet-Luminous Galaxies (UVLGs), local analogs to high-redshift LBGs. Our observations enable us to study the X-ray emission from LBG-like galaxies on an individual basis, thus allowing us to constrain object-to-object variances in this population. We supplement these results with X-ray stacking constraints using the new 3.2 Ms Chandra Deep Field-South (completed spring 2010) and LBG candidates selected from HST, Swift UVOT, and ground-based data. These measurements provide new X-ray constraints that sample well the entire z=0-4 baseline.

  12. Constrained clusters of gene expression profiles with pathological features.

    PubMed

    Sese, Jun; Kurokawa, Yukinori; Monden, Morito; Kato, Kikuya; Morishita, Shinichi

    2004-11-22

    Gene expression profiles should be useful in distinguishing variations in disease, since they accurately reflect the status of cells. The primary clustering of gene expression reveals the genotypes that are responsible for the proximity of members within each cluster, while further clustering elucidates the pathological features of the individual members of each cluster. However, since the first clustering process and the second classification step, in which the features are associated with clusters, are performed independently, the initial set of clusters may omit genes that are associated with pathologically meaningful features. Therefore, it is important to devise a way of identifying gene expression clusters that are associated with pathological features. We present the novel technique of 'itemset constrained clustering' (IC-Clustering), which computes the optimal cluster maximizing the interclass variance of gene expression between groups, under the restriction that only divisions expressible using common features are allowed. This constraint automatically labels each cluster with the set of pathological features that characterizes it. When applied to liver cancer datasets, IC-Clustering revealed informative gene expression clusters, which could be annotated with various pathological features, such as 'tumor' and 'man', or 'except tumor' and 'normal liver function'. In contrast, the k-means method overlooked these clusters.
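
    The sketch below illustrates the core idea of feature-constrained clustering on synthetic data: candidate sample groups are restricted to those definable by a conjunction of binary pathological features, and the itemset whose split maximizes the interclass variance of expression is retained. It is a simplified stand-in, not the authors' IC-Clustering implementation.

```python
# Simplified sketch of itemset-constrained clustering on hypothetical data.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes, n_features = 60, 100, 5
expr = rng.normal(size=(n_samples, n_genes))               # hypothetical expression matrix
feats = rng.integers(0, 2, size=(n_samples, n_features))   # hypothetical binary pathological features

def interclass_variance(x, mask):
    """Between-group variance of mean expression for a binary split of the samples."""
    if mask.sum() in (0, len(mask)):
        return 0.0
    p = mask.mean()
    grand = x.mean(axis=0)
    g1, g2 = x[mask].mean(axis=0), x[~mask].mean(axis=0)
    return float(p * np.sum((g1 - grand) ** 2) + (1 - p) * np.sum((g2 - grand) ** 2))

best_score, best_items = -1.0, None
for k in (1, 2):                                 # itemsets of one or two features
    for items in combinations(range(n_features), k):
        mask = feats[:, items].all(axis=1)       # samples carrying every feature in the itemset
        score = interclass_variance(expr, mask)
        if score > best_score:
            best_score, best_items = score, items

print("best feature itemset:", best_items, "interclass variance:", round(best_score, 3))
```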

  13. The interrelationship of the thorax and pelvis under varying task constraints.

    PubMed

    Delphinus, Elias M; Sayers, Mark Gregory Leigh

    2013-01-01

    The purpose of this study was to investigate the interrelationship between the thorax and pelvis during coupled movement patterns. Fifty-seven participants were assessed using an infrared motion analysis system to track trunk movement during maximal pelvis and thorax rotations over four trunk inclinations and two pelvic constraint conditions. A repeated-measures multivariate analysis of variance investigated the effects of forward trunk inclination and pelvic constraint on thorax and pelvic rotation. Forward trunk inclination from neutral to 45° resulted in a 46% (p < 0.001) decrease in axial pelvic rotation and a 15% (p < 0.001) decrease in axial thorax rotation with an unconstrained pelvis. A constrained pelvis resulted in a 15% (p < 0.001) decrease in axial thorax rotation. An externally constrained pelvis allowed the thorax to achieve an average of 18° (SD = 2°) greater rotational range of motion across all angles. This study reinforced the importance of allowing the pelvis to rotate during whole body axial rotation tasks. Results indicated that maximum axial trunk rotation is best achieved in a neutral posture, when the pelvis is allowed to contribute and flexion at the hips is minimised. For example, if a recumbent task requires rotation of the torso, then the chair seat should be allowed to swivel.

  14. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
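
    The sketch below shows how a generalizability coefficient of this kind is assembled from variance components in a students-by-stations design; the component values are hypothetical, not the study's estimates, and only the formula and its dependence on the number of stations are illustrated.

```python
# Back-of-the-envelope sketch of a generalizability (G) coefficient for a
# students x stations design. The variance components are hypothetical.
n_stations = 18

var_students = 0.040        # universe-score variance (object of measurement)
var_residual = 0.050        # student-by-station interaction + error

# Relative error averages over stations, so it shrinks with the number of stations.
rel_error = var_residual / n_stations
g_coefficient = var_students / (var_students + rel_error)
print(f"G coefficient with {n_stations} stations: {g_coefficient:.2f}")

# A decision-maker can rerun this with a different n_stations to see how many
# stations keep the coefficient above a target such as 0.80.
```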

  16. Multiple regression analysis of anthropometric measurements influencing the cephalic index of male Japanese university students.

    PubMed

    Hossain, Md Golam; Saw, Aik; Alam, Rashidul; Ohtsuki, Fumio; Kamarul, Tunku

    2013-09-01

    Cephalic index (CI), the ratio of head breadth to head length, is widely used to categorise human populations. The aim of this study was to assess the impact of anthropometric measurements on the CI of male Japanese university students. This study included 1,215 male university students from Tokyo and Kyoto, selected using convenience sampling. Multiple regression analysis was used to determine the effect of anthropometric measurements on CI. The variance inflation factor (VIF) showed no evidence of a multicollinearity problem among independent variables. The coefficients of the regression line demonstrated a significant positive relationship between CI and minimum frontal breadth (p < 0.01), bizygomatic breadth (p < 0.01) and head height (p < 0.05), and a negative relationship between CI and morphological facial height (p < 0.01) and head circumference (p < 0.01). Moreover, the coefficient and odds ratio of logistic regression analysis showed a greater likelihood for minimum frontal breadth (p < 0.01) and bizygomatic breadth (p < 0.01) to predict round-headedness, and morphological facial height (p < 0.05) and head circumference (p < 0.01) to predict long-headedness. Stepwise regression analysis revealed bizygomatic breadth, head circumference, minimum frontal breadth, head height and morphological facial height to be the best predictor craniofacial measurements with respect to CI. The results suggest that most of the variables considered in this study appear to influence the CI of adult male Japanese students.
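
    The sketch below reproduces the two diagnostics the abstract relies on, ordinary least-squares coefficients and variance inflation factors, on hypothetical data standing in for the anthropometric measurements.

```python
# OLS coefficients and variance inflation factors (VIF) on hypothetical data.
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 5))                      # hypothetical predictors
y = X @ np.array([0.5, 0.3, 0.2, -0.4, -0.3]) + rng.normal(size=n)

def ols_coefs(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), from regressing predictor j on the remaining predictors."""
    values = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        fitted = others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        r2 = 1.0 - np.sum((X[:, j] - fitted) ** 2) / np.sum((X[:, j] - X[:, j].mean()) ** 2)
        values.append(1.0 / (1.0 - r2))
    return np.array(values)

print("regression coefficients:", np.round(ols_coefs(X, y), 3))
print("VIFs (values near 1 indicate little multicollinearity):", np.round(vif(X), 2))
```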

  17. Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem

    USGS Publications Warehouse

    Schneider, David C.; Piatt, John F.

    1986-01-01

    The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
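
    The scale-dependence test described above can be illustrated with a short sketch: counts along a transect are pooled into progressively longer segments and the variance-to-mean ratio is recomputed at each scale. The clumped counts below are synthetic stand-ins for the survey data.

```python
# Variance-to-mean ratio as a function of measurement scale, on synthetic counts.
import numpy as np

rng = np.random.default_rng(3)
# 400 contiguous 0.25 km bins with clumped (over-dispersed) synthetic counts.
counts = rng.poisson(rng.gamma(shape=0.5, scale=4.0, size=400))

def variance_to_mean(counts, block):
    """Pool `block` adjacent bins and return the variance-to-mean ratio of the pooled counts."""
    usable = len(counts) - len(counts) % block
    pooled = counts[:usable].reshape(-1, block).sum(axis=1)
    return pooled.var(ddof=1) / pooled.mean()

for block in (1, 5, 10, 20, 60):                 # 0.25 km up to 15 km
    print(f"{0.25 * block:5.2f} km : variance/mean = {variance_to_mean(counts, block):.2f}")
```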

  18. The evolutionary stability of cross-sex, cross-trait genetic covariances.

    PubMed

    Gosden, Thomas P; Chenoweth, Stephen F

    2014-06-01

    Although knowledge of the selective agents behind the evolution of sexual dimorphism has advanced considerably in recent years, we still lack a clear understanding of the evolutionary durability of cross-sex genetic covariances that often constrain its evolution. We tested the relative stability of cross-sex genetic covariances for a suite of homologous contact pheromones of the fruit fly Drosophila serrata, along a latitudinal gradient where these traits have diverged in mean. Using a Bayesian framework, which allowed us to account for uncertainty in all parameter estimates, we compared divergence in the total amount and orientation of genetic variance across populations, finding divergence in orientation but not total variance. We then statistically compared orientation divergence of within-sex (G) to cross-sex (B) covariance matrices. In line with a previous theoretical prediction, we find that the cross-sex covariance matrix, B, is more variable than either within-sex G matrix. Decomposition of B matrices into their symmetrical and nonsymmetrical components revealed that instability is linked to the degree of asymmetry. We also find that the degree of asymmetry correlates with latitude suggesting a role for spatially varying natural selection in shaping genetic constraints on the evolution of sexual dimorphism. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  19. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    NASA Astrophysics Data System (ADS)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and discussing the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks, sampled from 17 different volcanoes in a volcanic cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These components represented 59%, 20%, and 6%, respectively, of the variance in the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.
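
    The sketch below walks through the PCA workflow on a hypothetical samples-by-elements concentration matrix (log-transformed and standardized before decomposition, a common convention that may differ from the paper's exact pre-treatment), reporting the variance explained by the leading components.

```python
# PCA of a hypothetical trace-element concentration matrix via SVD.
import numpy as np

rng = np.random.default_rng(4)
conc = rng.lognormal(mean=1.0, sigma=0.5, size=(120, 15))   # hypothetical samples x elements

X = np.log(conc)
X = (X - X.mean(axis=0)) / X.std(axis=0)                    # standardize each element

U, s, Vt = np.linalg.svd(X, full_matrices=False)            # PCA via singular value decomposition
explained = s**2 / np.sum(s**2)
scores = U * s                                              # sample scores on each principal component
loadings = Vt                                               # element loadings, one row per component

print("variance explained by PC1-PC3:", np.round(explained[:3], 3),
      "cumulative:", round(float(explained[:3].sum()), 3))
```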

  20. Solar Control of Earth's Ionosphere: Observations from Solar Cycle 23

    NASA Astrophysics Data System (ADS)

    Doe, R. A.; Thayer, J. P.; Solomon, S. C.

    2005-05-01

    A nine year database of sunlit E-region electron density altitude profiles (Ne(z)) measured by the Sondrestrom ISR has been partitioned over a 30-bin parameter space of averaged 10.7 cm solar radio flux (F10.7) and solar zenith angle (χ) to investigate long-term solar and thermospheric variability, and to validate contemporary EUV photoionization models. A two stage filter, based on rejection of Ne(z) profiles with large Hall to Pedersen ratio, is used to minimize auroral contamination. Resultant filtered mean Ne(z) compares favorably with subauroral Ne measured for the same F10.7 and χ conditions at the Millstone Hill ISR. Mean Ne, as expected, increases with solar activity and decreases with large χ, and the variance around mean Ne is shown to be greatest at low F10.7 (solar minimum). ISR-derived mean Ne is compared with two EUV models: (1) a simple model without photoelectrons and based on the 5 -- 105 nm EUVAC model solar flux [Richards et al., 1994] and (2) the GLOW model [Solomon et al., 1988; Solomon and Abreu, 1989] suitably modified for inclusion of XUV spectral components and photoelectron flux. Across parameter space and for all altitudes, Model 2 provides a closer match to ISR mean Ne and suggests that the photoelectron and XUV enhancements are essential to replicate measured plasma densities below 150 km. Simulated Ne variance envelopes, given by perturbing the Model 2 neutral atmosphere input by the measured extremum in Ap, F10.7, and Te, are much narrower than ISR-derived geophysical variance envelopes. We thus conclude that long-term variability of the EUV spectra dominates over thermospheric variability and that EUV spectral variability is greatest at solar minimum. ISR -- model comparison also provides evidence for the emergence of an H (Lyman β) Ne feature at solar maximum. Richards, P. G., J. A. Fennelly, and D. G. Torr, EUVAC: A solar EUV flux model for aeronomic calculations, J. Geophys. Res., 99, 8981, 1994. Solomon, S. C., P. B. Hays, and V. J. Abreu, The auroral 6300 Å emission: Observations and Modeling, J. Geophys. Res., 93, 9867, 1988. Solomon, S. C. and V. J. Abreu, The 630 nm dayglow, J. Geophys. Res., 94, 6817, 1989.

  1. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
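
    The sketch below illustrates the kind of search described above: simulated annealing over the two recursion parameters. The objective is a stand-in surface with several local minima, not the paper's regression model of noise-reduction quality.

```python
# Simulated annealing over two recursion parameters, with a hypothetical objective.
import numpy as np
from scipy.optimize import dual_annealing

def objective(params):
    a, b = params                    # the two recursion (smoothing) parameters, both in (0, 1)
    # Hypothetical cost surface: a smooth bowl plus ripples that create local minima.
    return (a - 0.92) ** 2 + (b - 0.85) ** 2 + 0.02 * np.sin(40 * a) * np.cos(40 * b)

result = dual_annealing(objective, bounds=[(0.5, 0.999), (0.5, 0.999)])
print("selected recursion parameters:", np.round(result.x, 3),
      "cost:", round(float(result.fun), 5))
```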

  2. Electron Pitch-Angle Distribution in Pressure Balance Structures Measured by Ulysses/SWOOPS

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Six, N. Frank (Technical Monitor)

    2002-01-01

    Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes. Yamauchi et al. [2002] investigated the magnetic structures of PBSs, applying a minimum variance analysis to Ulysses/Magnetometer data. They found that PBSs contain structures like current sheets or plasmoids, and suggested that PBSs are associated with network activity such as magnetic reconnection in the photosphere at the base of polar plumes. We have investigated energetic electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists, and we have found evidence supporting the earlier conclusions. We find that 45 out of 53 PBSs show local bi-directional or isotropic electron flux, or flux associated with current-sheet structure. Only five events show the pitch-angle distribution expected for Alfvenic fluctuations. We conclude that PBSs do contain magnetic structures such as current sheets or plasmoids that are expected as a result of network activity at the base of polar plumes.
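
    The minimum variance analysis referred to above reduces to an eigen-decomposition of the magnetic-field variance matrix; the eigenvector with the smallest eigenvalue estimates the boundary (current-sheet) normal. The sketch below applies it to a synthetic field rotation standing in for the magnetometer series.

```python
# Minimum variance analysis (MVA) of a synthetic magnetic-field series.
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical field: a large rotation in the x-y plane, little variation along z (the "normal").
t = np.linspace(0, np.pi, 300)
B = np.column_stack([3 * np.cos(t),
                     3 * np.sin(t),
                     0.2 + 0.05 * rng.normal(size=t.size)])

M = np.cov(B, rowvar=False)                 # magnetic-field variance (covariance) matrix
eigvals, eigvecs = np.linalg.eigh(M)        # eigenvalues in ascending order

normal = eigvecs[:, 0]                      # minimum-variance direction
print("eigenvalues (min, intermediate, max):", np.round(eigvals, 3))
print("estimated boundary normal:", np.round(normal, 3))
```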

  3. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China

    PubMed Central

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-01-01

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC) with careful pre- and post-processing to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064

  4. Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.

    PubMed

    Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya

    2013-08-20

    It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted for normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprised of the number of events that would occur in a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-y period in the Japanese area. Among the minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima, namely precursory and nonprecursory, to large earthquakes.
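
    The order parameter described above can be written as kappa_1 = <chi^2> - <chi>^2, where event k occupies natural time chi_k = k/N and the averages are weighted by normalized energy p_k = E_k / sum(E). The sketch below evaluates it for a synthetic catalog window.

```python
# Order parameter of seismicity in natural time for a hypothetical event window.
import numpy as np

rng = np.random.default_rng(6)
magnitudes = 4.0 + rng.exponential(scale=0.4, size=300)   # hypothetical catalog window
energy = 10.0 ** (1.5 * magnitudes)                       # seismic energy scales roughly as 10^(1.5 M)

def kappa1(energy):
    n = len(energy)
    chi = np.arange(1, n + 1) / n           # natural time of each event
    p = energy / energy.sum()               # normalized energy weights
    return float(np.sum(p * chi**2) - np.sum(p * chi) ** 2)

print("kappa_1 for this window:", round(kappa1(energy), 4))
# Sliding such a window through the catalog and tracking the fluctuations of
# kappa_1 is what reveals the precursory minima reported above.
```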

  5. Directly Estimating Earthquake Rupture Area using Second Moments to Reduce the Uncertainty in Stress Drop

    NASA Astrophysics Data System (ADS)

    McGuire, Jeffrey J.; Kaneko, Yoshihiro

    2018-06-01

    The key kinematic earthquake source parameters (rupture velocity, duration, and area) shed light on earthquake dynamics, provide direct constraints on stress drop, and have implications for seismic hazard. However, for moderate and small earthquakes, these parameters are usually poorly constrained due to limitations of the standard analysis methods. Numerical experiments by Kaneko and Shearer [2014, 2015] demonstrated that standard spectral fitting techniques can lead to roughly 1 order of magnitude variation in stress-drop estimates that do not reflect the actual rupture properties even for simple crack models. We utilize these models to explore an alternative approach where we estimate the rupture area directly. For the suite of models, the area-averaged static stress drop is nearly constant for models with the same underlying friction law, yet corner-frequency-based stress-drop estimates vary by a factor of 5-10 even for noise-free data. Alternatively, we simulated inversions for the rupture area as parameterized by the second moments of the slip distribution. A natural estimate for the rupture area derived from the second moments is A=πLcWc, where Lc and Wc are the characteristic rupture length and width. This definition yields estimates of stress drop that vary by only 10% between the models but are slightly larger than the true area-averaged values. We simulate inversions for the second moments for the various models and find that the area can be estimated well when there are at least 15 available measurements of apparent duration at a variety of take-off angles. The improvement compared to azimuthally-averaged corner-frequency-based approaches results from the second moments accounting for directivity and removing the assumption of a circular rupture area, both of which bias the standard approach. We also develop a new method that determines the minimum and maximum values of rupture area that are consistent with a particular dataset at the 95% confidence level. For the Kaneko and Shearer models with 20+ randomly distributed observations and ~10% noise levels, we find that the maximum and minimum bounds on rupture area typically vary by a factor of two and that the minimum stress drop is often more tightly constrained than the maximum.
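
    The sketch below illustrates the second-moment description of rupture area on a synthetic slip patch, using the common convention that Lc and Wc are twice the square roots of the eigenvalues of the slip-weighted second central spatial moment, with A = πLcWc as in the abstract.

```python
# Characteristic rupture length/width from the second spatial moments of slip.
import numpy as np

x, y = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-5, 5, 101))   # km grid
slip = np.exp(-(x / 6.0) ** 2 - (y / 2.5) ** 2)      # hypothetical elliptical slip patch (m)

w = slip / slip.sum()                                 # slip-weighted measure
xc, yc = np.sum(w * x), np.sum(w * y)                 # slip centroid
cov = np.array([[np.sum(w * (x - xc) ** 2), np.sum(w * (x - xc) * (y - yc))],
                [np.sum(w * (x - xc) * (y - yc)), np.sum(w * (y - yc) ** 2)]])

eigvals = np.linalg.eigvalsh(cov)                     # ascending eigenvalues
Wc, Lc = 2 * np.sqrt(eigvals)                         # characteristic width and length
area = np.pi * Lc * Wc
print(f"Lc = {Lc:.2f} km, Wc = {Wc:.2f} km, A = pi*Lc*Wc = {area:.1f} km^2")
```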

  6. Projected increase in lightning strikes in the United States due to global warming

    NASA Astrophysics Data System (ADS)

    Romps, David M.; Seeley, Jacob T.; Vollaro, David; Molinari, John

    2014-11-01

    Lightning plays an important role in atmospheric chemistry and in the initiation of wildfires, but the impact of global warming on lightning rates is poorly constrained. Here we propose that the lightning flash rate is proportional to the convective available potential energy (CAPE) times the precipitation rate. Using observations, the product of CAPE and precipitation explains 77% of the variance in the time series of total cloud-to-ground lightning flashes over the contiguous United States (CONUS). Storms convert CAPE times precipitated water mass to discharged lightning energy with an efficiency of 1%. When this proxy is applied to 11 climate models, CONUS lightning strikes are predicted to increase 12 ± 5% per degree Celsius of global warming and about 50% over this century.
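
    The sketch below shows how the proxy would be applied in practice: the flash rate is regressed against CAPE times precipitation rate and the explained variance is computed. The daily series are synthetic, so the fitted numbers are illustrative only.

```python
# CAPE x precipitation proxy for lightning, fitted to synthetic daily series.
import numpy as np

rng = np.random.default_rng(8)
cape = rng.gamma(shape=2.0, scale=500.0, size=365)      # J/kg, hypothetical daily values
precip = rng.gamma(shape=2.0, scale=2.0, size=365)      # mm/day, hypothetical daily values
noise = np.clip(1.0 + 0.3 * rng.normal(size=365), 0.1, None)
flashes = 1e-3 * cape * precip * noise                  # synthetic flash counts

proxy = cape * precip
# Least-squares proportionality constant and the fraction of variance explained.
k = np.sum(proxy * flashes) / np.sum(proxy ** 2)
r2 = 1.0 - np.sum((flashes - k * proxy) ** 2) / np.sum((flashes - flashes.mean()) ** 2)
print(f"fitted proportionality constant: {k:.2e}, variance explained R^2 = {r2:.2f}")
```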

  7. Sparse and stable Markowitz portfolios.

    PubMed

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-07-28

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolios as special cases, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
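
    A minimal sketch of this kind of penalized formulation appears below: least-squares tracking of a target return with an L1 penalty on the weights and a full-investment constraint, solved after splitting the weights into positive and negative parts so the penalty becomes smooth. The returns are synthetic and the penalty level is arbitrary; this is an illustration of the formulation, not the authors' solver or exact constraint set.

```python
# Sparse mean-variance portfolio via L1-penalized constrained least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
T, N = 250, 10
R = 0.01 * rng.normal(size=(T, N)) + 0.0004          # hypothetical asset returns
target = np.full(T, R.mean())                        # target portfolio return series
tau = 0.05                                           # sparsity (L1) penalty level

def objective(z):
    u, v = z[:N], z[N:]                              # split w = u - v with u, v >= 0
    w = u - v
    return np.sum((target - R @ w) ** 2) + tau * np.sum(u + v)

cons = [{"type": "eq", "fun": lambda z: np.sum(z[:N] - z[N:]) - 1.0}]   # weights sum to 1
res = minimize(objective, x0=np.full(2 * N, 1.0 / N),
               bounds=[(0, None)] * (2 * N), constraints=cons, method="SLSQP")

w = res.x[:N] - res.x[N:]
print("weights:", np.round(w, 3))
print("active positions:", int(np.sum(np.abs(w) > 1e-4)))
```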

  8. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes' rule, and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics; they propose not only to use the truncated Gaussian as a model for selected data, but to model measurements fundamentally in terms of truncated Gaussians.
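
    The moment calculations mentioned above are easy to reproduce with SciPy's truncated-normal distribution, which parameterizes the truncation limits in standard deviations from the mean; the numbers below are arbitrary examples, not values from the paper.

```python
# Mean and variance of a truncated Gaussian using SciPy's parameterization.
from scipy.stats import truncnorm

mu, sigma = 2.0, 1.5          # untruncated mean and standard deviation
lower, upper = 0.0, 5.0       # hard physical limits on the measurement

a, b = (lower - mu) / sigma, (upper - mu) / sigma
dist = truncnorm(a, b, loc=mu, scale=sigma)

print("truncated mean:    ", round(dist.mean(), 4))   # pulled toward the interval center
print("truncated variance:", round(dist.var(), 4))    # always smaller than sigma**2
```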

  9. Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies

    PubMed Central

    Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne

    2014-01-01

    Background In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
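
    The comparison described above rests on the multivariate breeder's equation, delta_z = G beta. The sketch below contrasts the predicted response with a full (hypothetical) genetic covariance matrix against the same calculation with the genetic correlations set to zero, and compares evolvability along the direction of selection with the average over random directions.

```python
# Predicted responses to selection with and without genetic correlations.
import numpy as np

G = np.array([[1.0, 0.6, 0.4, 0.2],      # hypothetical genetic (co)variance matrix
              [0.6, 1.0, 0.5, 0.3],
              [0.4, 0.5, 1.0, 0.4],
              [0.2, 0.3, 0.4, 1.0]])
beta = np.array([0.3, -0.2, 0.1, 0.05])  # hypothetical selection gradients

resp_full = G @ beta                      # response with genetic correlations
resp_nocorr = np.diag(np.diag(G)) @ beta  # same calculation with correlations set to zero

rate_ratio = np.linalg.norm(resp_full) / np.linalg.norm(resp_nocorr)
print("response with correlations:   ", np.round(resp_full, 3))
print("response without correlations:", np.round(resp_nocorr, 3))
print(f"relative rate of adaptation:   {rate_ratio:.2f}")

# Evolvability along the direction of selection vs. the average over random directions.
u = beta / np.linalg.norm(beta)
print("evolvability along selection:", round(float(u @ G @ u), 3),
      "vs average (mean eigenvalue):", round(float(np.trace(G) / G.shape[0]), 3))
```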

  10. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    PubMed

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  11. Reduced population variance in strontium isotope values informs domesticated turkey use at Chaco Canyon, New Mexico, USA

    USGS Publications Warehouse

    Grimstead, Deanna N; Reynolds, Amanda C; Hudson, Adam M; Akins, Nancy J; Betancourt, Julio L.

    2016-01-01

    Traditionally strontium isotopes (87Sr/86Sr) have been used as a sourcing tool in numerous archaeological artifact classes. The research presented here demonstrates that 87Sr/86Srbioapatite ratios also can be used at a population level to investigate the presence of domesticated animals and methods of management. The proposed methodology combines ecology, isotope geochemistry, and behavioral ecology to assess the presence and nature of turkey (Meleagris gallopavo) domestication. This case study utilizes 87Sr/86Srbioapatite ratios from teeth and bones of archaeological turkey, deer (Odocoileus sp.), lagomorph (Lepus sp. and Sylvilagus sp.), and prairie-dog (Cynomys sp.) from Chaco Canyon, New Mexico, U.S.A. (ca. A.D. 800 – 1250). Wild deer and turkey from the southwestern U.S.A. have much larger home ranges and dispersal behaviors (measured in kilometers) when compared to lagomorphs and prairie dogs (measured in meters). Hunted deer and wild turkey from archaeological contexts at Chaco Canyon are expected to have a higher variance in their 87Sr/86Srbioapatite ratios, when compared to small range taxa (lagomorphs and prairie dogs). Contrary to this expectation, 87Sr/86Srbioapatite values of turkey bones from Chacoan assemblages have a much lower variance than deer and are similar to that of smaller mammals. The sampled turkey values show variability most similar to lagomorphs and prairie dogs, suggesting the turkeys from Chaco Canyon were consuming a uniform diet and/or were constrained within a limited home range, indicating at least proto-domestication. The population approach has wide applicability for evaluating the presence and nature of domestication when combined with paleoecology and behavioral ecology in a variety of animals and environments.

  12. A study of changes in genetic and environmental influences on weight and shape concern across adolescence.

    PubMed

    Wade, Tracey D; Hansell, Narelle K; Crosby, Ross D; Bryant-Waugh, Rachel; Treasure, Janet; Nixon, Reginald; Byrne, Susan; Martin, Nicholas G

    2013-02-01

    The goal of the current study was to examine whether genetic and environmental influences on an important risk factor for disordered eating, weight and shape concern (WSC), remained stable over adolescence. This stability was assessed in 2 ways: whether new sources of latent variance were introduced over development and whether the magnitude of variance contributing to the risk factor changed. We examined an 8-item WSC subscale derived from the Eating Disorder Examination (EDE) using telephone interviews with female adolescents. From 3 waves of data collected from female-female same-sex twin pairs from the Australian Twin Registry, a subset of the data (which included 351 pairs at Wave 1) was used to examine 3 age cohorts: 12 to 13, 13 to 15, and 14 to 16 years. The best-fitting model contained genetic and environmental influences, both shared and nonshared. Biometric model fitting indicated that nonshared environmental influences were largely specific to each age cohort, and results suggested that latent shared environmental and genetic influences that were influential at 12 to 13 years continued to contribute to subsequent age cohorts, with independent sources of both emerging at ages 13 to 15. The magnitude of all 3 latent influences could be constrained to be the same across adolescence. Ages 13 to 15 were indicated as a time of risk for the development of high levels of WSC, given that most specific environmental risk factors were significant at this time (e.g., peer teasing about weight, adverse life events) and that new sources of latent genetic and environmental variance emerged over this period.

  13. Cosmic Bulk Flow and the Local Motion from Cosmicflows-2

    NASA Astrophysics Data System (ADS)

    Courtois, Helene M.; Hoffman, Yehuda; Tully, R. Brent

    2015-08-01

    Full sky surveys of peculiar velocity are arguably the best way to map the large scale structure out to distances of a few times 100 Mpc/h. Using the largest and most accurate catalog of galaxy peculiar velocities to date, Cosmicflows-2, the large scale structure has been reconstructed by means of the Wiener filter and constrained realizations, assuming as a Bayesian prior model the LCDM standard model of cosmology. The present paper focuses on studying the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R=500 Mpc/h. Our main result is that the estimated bulk flow is consistent with the LCDM model with the WMAP inferred cosmological parameters. At R=50 (150) Mpc/h the estimated bulk velocity is 250 +/- 21 (239 +/- 38) km/s. The corresponding cosmic variance at these radii is 126 (60) km/s, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R ~200 Mpc/h, where the cosmic variance on the individual Supergalactic Cartesian components (of the r.m.s. values) exceeds the variance of the constrained realizations by at least a factor of 2. The SGX and SGY components of the CMB dipole velocity are recovered by the Wiener Filter velocity field down to a very few km/s. The SGZ component of the estimated velocity, the one that is most affected by the Zone of Avoidance, is off by 126 km/s (an almost 2 sigma discrepancy). The bulk velocity analysis reported here is virtually unaffected by the Malmquist bias, and very similar results are obtained for the data with and without the bias correction.

  14. Constraining Landscape History and Glacial Erosivity Using Paired Cosmogenic Nuclides in Upernavik, Northwest Greenland

    NASA Technical Reports Server (NTRS)

    Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.

    2013-01-01

    High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic Be-10 and Al-26 in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope (Al-26/Be-10) calculations on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of Be-10 and Al-26 inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly, at rates of approx. 120 m/yr.

  15. Three-Dimensional Distribution of Larval Fish Habitats in the Shallow Oxygen Minimum Zone in the Eastern Tropical Pacific Ocean off Mexico

    NASA Astrophysics Data System (ADS)

    Davies, S.; Sanchez Velasco, L.; Beier, E.; Godinez, V. M.; Barton, E. D.; Tamayo, A.

    2016-02-01

    The three-dimensional distribution of larval fish habitats was analyzed, from the upper limit of the shallow oxygen minimum zone (0.2 mL/L) to the sea surface, in the eastern tropical Pacific Ocean off Mexico in February 2010. The upper limit rises from 250 m depth at the entrance of the Gulf of California to 80 m depth off Cabo Corrientes. Three larval fish habitats were defined statistically: (i) a Gulf of California habitat dominated by Anchoa spp. larvae (epipelagic species), constrained to the oxygenated surface layer (>3.5 mL/L) in and above the thermocline (60 m depth), and separated by a salinity front from the Tropical Pacific habitat; (ii) a Tropical Pacific habitat, dominated by Vinciguerria lucetia larvae (mesopelagic species), located throughout the sampled water column, but with the highest abundance in the oxygenated upper layer above the thermocline; (iii) an Oxygen Minimum habitat defined mostly below the thermocline in hypoxic (<1 mL/L; 70 m depth) and anoxic (<0.2 mL/L; 80 m depth) water off Cabo Corrientes. This subsurface hypoxic habitat had the highest species richness and larval abundance, with dominance of Bregmaceros bathymaster, an endemic neritic pelagic species, which was an unexpected result. This may be associated with the shoaling of the upper limit of the shallow oxygen minimum zone near the coast, a result of the strong coastal upwelling detected by the Bakun Index. In this region of strong and semi-continuous coastal upwelling in the eastern tropical Pacific off Mexico, the shallow hypoxic water does not have dramatic effects on the total larval fish abundance but appears to affect species composition.

  16. Applications of GARCH models to energy commodities

    NASA Astrophysics Data System (ADS)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
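
    To make the variance recursion behind these models concrete, the sketch below filters a synthetic return series through a plain GARCH(1,1); the parameters are illustrative rather than estimates from the thesis, and a seasonal specification would additionally let the constant or the persistence vary with the season.

```python
# GARCH(1,1) conditional-variance filter on a synthetic return series.
import numpy as np

rng = np.random.default_rng(10)
returns = 0.02 * rng.normal(size=1000)         # hypothetical daily returns

omega, alpha, beta = 1e-5, 0.08, 0.90          # illustrative GARCH(1,1) parameters

def garch_variance(r, omega, alpha, beta):
    """Conditional variance recursion: h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

h = garch_variance(returns, omega, alpha, beta)
print("mean conditional volatility:        ", round(float(np.sqrt(h).mean()), 4))
print("long-run (unconditional) volatility:", round(float(np.sqrt(omega / (1 - alpha - beta))), 4))
```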

  17. Unique relation between surface-limited evaporation and relative humidity profiles holds in both field data and climate model simulations

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.

    2013-12-01

    A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.

  18. Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Richardson, I. G.; Ling, A. G.

    2011-01-01

    In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times. In 2009, the approx. 2 GV GCR intensity measured by the Newark neutron monitor increased by approx. 5% relative to its maximum value two cycles earlier, even though the average tilt angle in 2009 was slightly larger than that in 1986 (approx. 20 deg vs. approx. 14 deg), while solar wind B was significantly lower (approx. 3.9 nT vs. approx. 5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the approx. 1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past approx. 10(exp 4) years shows nine abrupt and relatively short-lived drops of B to < or approx. = 0 nT, with the first of these corresponding to the Sporer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of approx. 2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.

  19. New Observations of the Crab Nebula and Pulsar

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; Tennant, Allyn F.; O'Dell, Stephen L.; Elsner, Ronald F.; Yakovlev, Dmitry R.; Zavlin, Vyacheslav E.; Becker, Werner

    2010-01-01

    We present a phase-resolved study of the X-ray spectrum of the Crab Pulsar, using data obtained in a special mode with the Chandra X-ray Observatory. The superb angular resolution easily enables discerning the Pulsar from the surrounding nebulosity, even at pulse minimum. We find that the Pulsar's X-ray spectral index varies sinusoidally with phase---except over the same phase range for which rather abrupt changes in optical polarization magnitude and position angle have been reported. In addition, we use the X-ray data to constrain the surface temperature for various neutron-star equations of state and atmospheres. Finally, we present new data on dynamical variations of structure within the Nebula.

  20. Activity Cycles in Stars

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.

    2009-01-01

    Starspots and stellar activity can be detected in other stars using high precision photometric and spectrometric measurements. These observations have provided some surprises (starspots at the poles - sunspots are rarely seen poleward of 40 degrees) but more importantly they reveal behaviors that constrain our models of solar-stellar magnetic dynamos. The observations reveal variations in cycle characteristics that depend upon the stellar structure, convection zone dynamics, and rotation rate. In general, the more rapidly rotating stars are more active. However, for stars like the Sun, some are found to be inactive while nearly identical stars are found to be very active indicating that periods like the Sun's Maunder Minimum (an inactive period from 1645 to 1715) are characteristic of Sun-like stars.
