The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Goldstein, M. L.
2006-01-01
We study the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the ratio of maximum to minimum power (from approximately 3:1 up to approximately 20:1): the anisotropy decreases as the time interval chosen for the variance increases, a direct result of the "spherical polarization" of the waves, which follows from the near constancy of |B|. The reason for this magnitude-preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
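As an editorial illustration of the variance analysis underlying this and several of the abstracts below, here is a minimal numpy sketch of the standard eigen-decomposition of the field covariance matrix; the random-walk series is invented, and real solar wind data would show the max-to-min power ratio falling as the interval grows, as the abstract describes.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Eigen-decomposition of the magnetic-field covariance matrix.
    B is an (N, 3) array of field samples; the eigenvector paired with
    the smallest eigenvalue is the minimum variance direction."""
    M = np.cov(B, rowvar=False)             # 3x3 covariance of Bx, By, Bz
    eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    return eigvals, eigvecs

# Max-to-min power ratio versus interval length (synthetic series)
rng = np.random.default_rng(0)
B = rng.standard_normal((4096, 3)).cumsum(axis=0)
for n in (256, 1024, 4096):
    lam, vecs = minimum_variance_analysis(B[:n])
    print(n, lam[-1] / lam[0], vecs[:, 0])  # anisotropy and min-variance axis
```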
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamic (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame, which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. For the fluctuations at 2.3 mHz, the direction of minimum variance lies at approximately 40 deg to the average magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features.
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even within a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Charged particle tracking at Titan, and further applications
NASA Astrophysics Data System (ADS)
Bebesi, Zsofia; Erdos, Geza; Szego, Karoly
2016-04-01
We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th-order Runge-Kutta method to calculate particle trajectories in a time-reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits, so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside the magnetodisc current sheet or in the lobe regions. We also discuss the code's applicability to comets.
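A minimal sketch of this kind of test-particle tracing, assuming only the Lorentz force and a classic 4th-order Runge-Kutta step; the homogeneous 5 nT field, proton charge-to-mass ratio, step size, and initial conditions are illustrative, and a negative time step gives the time-reversed trajectory the abstract mentions.

```python
import numpy as np

def lorentz_accel(v, B, q_over_m):
    # Electric field neglected in this sketch
    return q_over_m * np.cross(v, B)

def rk4_step(x, v, dt, B_func, q_over_m):
    """One 4th-order Runge-Kutta step for dx/dt = v, dv/dt = (q/m) v x B."""
    def deriv(xx, vv):
        return vv, lorentz_accel(vv, B_func(xx), q_over_m)
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

# Homogeneous 5 nT field (assumed); negative dt traces the orbit backward in time
B0 = np.array([0.0, 0.0, 5e-9])
x, v = np.zeros(3), np.array([1.0e4, 0.0, 0.0])     # m, m/s
for _ in range(1000):
    x, v = rk4_step(x, v, -1e-3, lambda _: B0, q_over_m=9.58e7)  # proton q/m
print(x, np.linalg.norm(v))      # speed is conserved by a magnetic force alone
```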
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form, in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alignment.
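The sine-generated curve at the core of this theory is straightforward to construct numerically; the sketch below uses an illustrative maximum deviation angle and does not attempt to reproduce the paper's specific 4.7 averaging convention.

```python
import numpy as np

M = 100.0                         # along-channel length of one meander
omega = np.deg2rad(110.0)         # maximum deviation angle (illustrative)
s = np.linspace(0.0, M, 10001)
theta = omega * np.sin(2.0 * np.pi * s / M)   # direction angle: sine of distance

ds = s[1] - s[0]                  # planimetric path by integrating the headings
x = np.cumsum(np.cos(theta)) * ds
y = np.cumsum(np.sin(theta)) * ds

sinuosity = M / (x[-1] - x[0])                   # channel length / valley length
radius = 1.0 / np.maximum(np.abs(np.gradient(theta, s)), 1e-12)
print(sinuosity, radius.min())                   # tightest bend at the apex
```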
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hinders direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches are given in tables.
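For orientation, the weighted least-squares formulation described here can be written down directly; the sketch below is the iterative (Gauss-Newton) baseline the paper compares against, not the algebraic TAQMV method, and it assumes a spherical rather than ellipsoidal earth with invented geometry and noise levels.

```python
import numpy as np

C = 299792458.0      # speed of light, m/s
R_EARTH = 6371e3     # spherical earth here; the paper uses an ellipsoid

def residuals(p, sensors, toas, alt, sigma_toa, sigma_alt):
    """Weighted residuals for the 4 unknowns p = (x, y, z, t0); the assumed
    altitude enters as one more soft-constraint residual."""
    pos, t0 = p[:3], p[3]
    ranges = np.linalg.norm(sensors - pos, axis=1)
    r = (t0 + ranges / C - toas) / sigma_toa
    return np.append(r, (np.linalg.norm(pos) - (R_EARTH + alt)) / sigma_alt)

def gauss_newton(p, *args, iters=15, eps=1e-2):
    for _ in range(iters):                  # needs a reasonable initial guess
        r0 = residuals(p, *args)
        J = np.empty((r0.size, p.size))
        for k in range(p.size):             # numerical Jacobian, column by column
            dp = np.zeros(p.size); dp[k] = eps
            J[:, k] = (residuals(p + dp, *args) - r0) / eps
        p = p - np.linalg.lstsq(J, r0, rcond=None)[0]
    return p

sensors = np.array([[7e6, 0, 0], [0, 7e6, 0], [0, 0, 7e6], [5e6, 5e6, 1e6]], float)
truth = np.array([3.0e6, 2.0e6, 5.8e6, 1e-4])
toas = truth[3] + np.linalg.norm(sensors - truth[:3], axis=1) / C
alt = np.linalg.norm(truth[:3]) - R_EARTH
print(gauss_newton(np.array([1e6, 1e6, 6e6, 0.0]), sensors, toas, alt, 1e-8, 10.0))
```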
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
A test of source-surface model predictions of heliospheric current sheet inclination
NASA Technical Reports Server (NTRS)
Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.
1994-01-01
The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-14
... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
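A minimal sketch of a minimum variance unbiased (generalized least squares) estimate of a mean under an assumed distance-correlation model; the exponential correlogram and the data are illustrative stand-ins for the report's model.

```python
import numpy as np

def blue_mean(values, coords, corr_len):
    """Best linear unbiased (minimum variance) estimate of the mean when
    samples are correlated over distance; weights are C^{-1}1, normalized."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Cmat = np.exp(-d / corr_len)            # assumed exponential correlogram
    w = np.linalg.solve(Cmat, np.ones(len(values)))
    w /= w.sum()                            # unbiasedness: weights sum to 1
    return w @ values, w @ Cmat @ w         # estimate, variance factor (x sigma^2)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, (30, 2))       # sample sites, km (illustrative)
values = rng.normal(50.0, 10.0, 30)         # e.g., acreage per sampled stratum
print(blue_mean(values, coords, corr_len=20.0))
```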
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing seeks to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directivity index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
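A small numpy sketch of the MVDR weights used for point focusing, assuming free-field monopole steering vectors and an illustrative control-region covariance with diagonal loading; this is not the authors' implementation.

```python
import numpy as np

def steering(src, pt, k):
    """Free-field monopole transfer functions from each source to a point."""
    r = np.linalg.norm(src - pt, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def mvdr_weights(a, R):
    """Minimize w^H R w subject to w^H a = 1: w = R^{-1}a / (a^H R^{-1}a)."""
    Ra = np.linalg.solve(R, a)
    return Ra / (a.conj() @ Ra)

k = 2 * np.pi * 1000.0 / 343.0                  # wavenumber at 1 kHz
rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, (16, 3))           # 16 loudspeakers (illustrative)
a_focus = steering(src, np.array([0.2, 0.1, 0.0]), k)

# Spatial covariance over a control region, with diagonal loading for stability
pts = rng.uniform(-0.5, 0.5, (200, 3))
A = np.stack([steering(src, p, k) for p in pts], axis=1)
R = A @ A.conj().T / A.shape[1] + 1e-6 * np.eye(16)
w = mvdr_weights(a_focus, R)
print(abs(w.conj() @ a_focus))                  # distortionless: 1.0 at the focus
```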
NASA Technical Reports Server (NTRS)
Grappin, R.; Velli, M.
1995-01-01
The solar wind is not an isotropic medium; two symmetry axes are present: first the radial direction (because the mean wind is radial) and second the spiral direction of the mean magnetic field, which depends on heliocentric distance. Observations show very different anisotropy directions depending on the frequency waveband; while the large-scale velocity fluctuations are essentially radial, the smaller-scale magnetic field fluctuations are mostly perpendicular to the mean field direction, which is not the expected linear (WKB) result. We attempt to explain how these properties are related, with the help of numerical simulations.
Design of a compensator for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the stocks. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
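A compact sketch of the mean-variance model with a target-return constraint, solved through its KKT linear system; the synthetic weekly returns and the short-sales-allowed assumption are illustrative.

```python
import numpy as np

def min_variance_portfolio(Sigma, mu, target):
    """Minimize w'Sigma w subject to w'mu = target and w'1 = 1
    (equality-constrained QP via its KKT system; short sales allowed)."""
    n = len(mu)
    ones = np.ones(n)
    KKT = np.block([[2.0 * Sigma, mu[:, None], ones[:, None]],
                    [mu[None, :], np.zeros((1, 2))],
                    [ones[None, :], np.zeros((1, 2))]])
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(KKT, rhs)[:n]

rng = np.random.default_rng(0)
returns = rng.normal(0.002, 0.03, (260, 20))   # invented weekly returns, 20 stocks
Sigma, mu = np.cov(returns, rowvar=False), returns.mean(axis=0)
w = min_variance_portfolio(Sigma, mu, target=0.003)
print(w.sum(), w @ mu, w @ Sigma @ w)          # 1.0, target return, minimum variance
```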
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...
Additive-Multiplicative Approximation of Genotype-Environment Interaction
Gimelfarb, A.
1994-01-01
A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that GxE interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method is a robust, high-resolution spectrum estimation method. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a minimum variance and pole placement coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We not only theoretically provide three proven control laws for the developed EDM adaptive control system, but also practically show the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. The TP control law also provided burn-free machining.
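As a toy illustration of the minimum variance idea these control laws build on (not the authors' EDM laws), here is a textbook MV regulator for a first-order ARX plant with assumed parameters: cancelling the predictable part of the output leaves only the unpredictable noise, the minimum achievable output variance.

```python
import numpy as np

# Toy minimum variance regulation of y(t+1) = a*y(t) + b*u(t) + e(t+1):
# the MV law u(t) = (r - a*y(t))/b cancels the predictable part, so the
# closed-loop output variance equals the noise variance (its minimum).
rng = np.random.default_rng(2)
a, b, sigma_e, r = 0.8, 0.5, 0.05, 1.0      # assumed plant and setpoint
y, ys = 0.0, []
for _ in range(2000):
    u = (r - a * y) / b                     # MV control with known parameters;
    y = a * y + b * u + sigma_e * rng.standard_normal()  # an adaptive version
    ys.append(y)                            # would estimate a, b on-line
print(np.var(np.asarray(ys) - r))           # -> approximately sigma_e**2
```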
Electron Heat Flux in Pressure Balance Structures at Ulysses
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes and to be related to network activity such as magnetic reconnection in the photosphere. We investigated the magnetic structures of the PBSs, applying a minimum variance analysis to Ulysses magnetometer data. At the 2001 AGU Spring Meeting, we reported that PBSs have structures like current sheets or plasmoids, and suggested that they are associated with network activity at the base of polar plumes. In this paper, we have analyzed high-energy electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists and to confirm the conclusions more precisely. Although most events show a typical flux directed away from the Sun, we have obtained evidence that some PBSs show bi-directional electron flux and others show an isotropic distribution of electron pitch angles. The evidence shows that plasmoids are flowing away from the Sun, changing their flow direction dynamically in a way not caused by Alfven waves. From this, we have concluded that PBSs are generated by network activity at the base of polar plumes and that their magnetic structures are current sheets or plasmoids.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
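The paper's identifier additionally handles multiplicative noise; as a baseline sketch of on-line parameter identification, here is a standard recursive least squares estimator run on an invented first-order plant.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step: theta are the parameter estimates,
    P the (scaled) parameter covariance, phi the regressor, y the new output."""
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam
    return theta, P

# Identify y(t) = a*y(t-1) + b*u(t-1) + e(t) from input-output data
rng = np.random.default_rng(3)
a, b = 0.8, 0.5
theta, P = np.zeros(2), 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for _ in range(2000):
    y = a * y_prev + b * u_prev + 0.02 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, rng.standard_normal()   # persistently exciting input
print(theta)                                     # -> approximately [0.8, 0.5]
```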
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines was considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
NASA Astrophysics Data System (ADS)
Kohán, Balázs; Tyler, Jonathan; Jones, Matthew; Kern, Zoltán
2017-04-01
Water stable isotopes are important natural tracers of the hydrological cycle on global, regional, and local scales. Daily precipitation water samples were collected from 70 sites over the British Isles on the 23rd, 24th, and 25th January, 2012 [1]. Samples were collected as part of a pilot study for the British Isotopes in Rainfall Project, a community engagement initiative, in collaboration with volunteer weather observers and the UK Met Office. The spatial correlation structure of the daily precipitation stable oxygen isotope composition (δ18OP) was explored by variogram analysis [2]. Since the variograms from the raw data suggested a pronounced trend, owing to the spatial trend discussed in the original study [1], a second order polynomial trend was removed from the raw δ18OP data and variograms were calculated on the residuals. Directional experimental semivariograms were calculated (steps: 10°, tolerance: 30°) and aggregated into variogram surface plots to explore the spatial dependence structure of daily δ18OP. Each daily data set produced distinct variogram plots.
- A well-expressed anisotropic structure can be seen for Jan 23. The lowest and highest variances were observed in the SW-NE and NNE-SSW directions, respectively. Meteorological observations showed that the majority of the atmospheric flow was SW on this day, so the direction of low variance seems to reflect this flow direction, while the maximum variance might reflect the moisture variance along the elongation of the frontal system.
- A less characteristic but still expressed anisotropic structure was found for Jan 24, when a warm front passed the British Isles perpendicular to the east coast, leading to a characteristic east-west δ18OP gradient suggestive of progressive rainout. The low variance central zone has a 100 km radius, which might correspond to the width of the warm front zone. Although the axis of minimum variance was similarly SW-NE, the zone of maximum variance was broader and practically perpendicular to it. In this case, however, the directions of the axes appear misaligned with the flow direction.
- No similarly characteristic pattern was observed in the variogram calculated from the Jan 25 data set.
These preliminary results suggest that variogram analysis is a promising approach to link δ18OP patterns to atmospheric processes. NKFIH: SNN118205/ARRS: N1-0054 References: 1. Tyler, J. J., Jones, M., Arrowsmith, C., Allott, T., & Leng, M. J. (2016). Spatial patterns in the oxygen isotope composition of daily rainfall in the British Isles. Climate Dynamics 47:1971-1987. 2. Webster, R., & Oliver, M. A. (2007). Geostatistics for Environmental Scientists. John Wiley & Sons, Chichester.
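A minimal sketch of the directional experimental semivariogram computation described above (sweeping the azimuth in 10° steps would build the variogram surface); coordinates, residuals, and lag bins are illustrative, and detrending is assumed already done.

```python
import numpy as np

def directional_semivariogram(coords, values, azimuth_deg, tol_deg=30.0,
                              lag_edges=np.arange(0.0, 500.0, 50.0)):
    """Experimental semivariogram restricted to point pairs whose separation
    azimuth lies within +/- tol_deg of the chosen direction."""
    dxy = coords[:, None, :] - coords[None, :, :]
    h = np.linalg.norm(dxy, axis=-1)
    az = np.degrees(np.arctan2(dxy[..., 0], dxy[..., 1])) % 180.0
    diff = np.abs(az - azimuth_deg % 180.0)
    in_dir = np.minimum(diff, 180.0 - diff) <= tol_deg
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        sel = in_dir & (h > lo) & (h <= hi)
        gamma.append(0.5 * sq[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 400.0, (70, 2))      # 70 sites, km (illustrative)
resid = rng.normal(0.0, 1.0, 70)           # detrended d18O residuals (invented)
print(directional_semivariogram(xy, resid, azimuth_deg=45.0))
```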
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure in the data. Under such circumstances, ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
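A sketch of the two-stage weighted least squares fit such an error variance model leads to, under the common assumption Var(e_i) = sigma^2 * x_i^(2k); the data and the log-residual regression used to estimate k are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 10.0, 300)
y = 2.0 + 3.0 * x + 0.5 * x**1.5 * rng.standard_normal(300)   # Var ~ x^3

X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]               # unweighted fit

# Estimate the variance exponent k from log squared OLS residuals
res = y - X @ beta_ols
slope = np.linalg.lstsq(np.column_stack([np.ones_like(x), np.log(x)]),
                        np.log(res**2), rcond=None)[0][1]
k = slope / 2.0                                               # -> about 1.5

w = x ** (-2.0 * k)                       # weights proportional to 1/variance
sw = np.sqrt(w)
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
print(beta_ols, beta_wls)                 # WLS recovers minimum variance estimates
```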
A New Look at Some Solar Wind Turbulence Puzzles
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2006-01-01
Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra is largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations indicating that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time permitting, we will discuss some observations suggesting that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
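Of the four procedures compared, the analysis-of-variance estimate is the simplest to sketch; the version below assumes balanced data and the standard repeatability formulas, not necessarily the authors' exact implementation.

```python
import numpy as np

def repeatability_anova(Y, R2=0.90):
    """Balanced one-way ANOVA estimate of the repeatability coefficient r,
    plus the number of measurements m0 needed for a target determination R2."""
    g, m = Y.shape
    msg = m * ((Y.mean(axis=1) - Y.mean()) ** 2).sum() / (g - 1)
    mse = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))
    var_g = (msg - mse) / m                 # genotypic variance component
    r = var_g / (var_g + mse)
    m0 = R2 * (1 - r) / ((1 - R2) * r)
    return r, m0

# Invented data: 71 genotypes, 16 yearly measurements
rng = np.random.default_rng(0)
Y = 30.0 + rng.normal(0, 2.0, (71, 1)) + rng.normal(0, 3.0, (71, 16))
print(repeatability_anova(Y))               # r near 4/13, m0 for R2 = 0.90
```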
Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping
2016-02-01
Previous simulation work concerned with the mechanism of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variances were also observed within layers; reducing pulse length could magnify the threshold values as well as the variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation closely resemble recordings in physiological and clinical studies, which suggests that, with minimum simplification of the neuron model, the cable theory-based simulation approach can be sufficiently realistic to give quantitatively accurate evaluations of cell activities in response to the externally applied field.
Future mission studies: Preliminary comparisons of solar flux models
NASA Technical Reports Server (NTRS)
Ashrafi, S.
1991-01-01
The results of comparisons of solar flux models are presented. (The wavelength lambda = 10.7 cm radio flux is the best indicator of the strength of the ionizing radiations, such as solar ultraviolet and x-ray emissions, that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Thus, accurate forecasting of the solar flux F10.7 is crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimation that properly combines all three models into one that minimizes the variance is also discussed; all the physics inherent in each model are thereby combined. This is considered to be the dead-end statistical approach to solar flux forecasting before any nonlinear chaotic approach.
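For independent forecast errors, the combined linear unbiased minimum-variance estimator mentioned here reduces to inverse-variance weighting; the forecast values and error variances below are invented, and correlated errors would instead require the full error covariance (weights proportional to the row sums of its inverse).

```python
import numpy as np

forecasts = np.array([140.0, 155.0, 148.0])   # invented F10.7 forecasts
variances = np.array([25.0, 64.0, 36.0])      # assumed model error variances
w = (1.0 / variances) / (1.0 / variances).sum()
combined = w @ forecasts
combined_var = 1.0 / (1.0 / variances).sum()  # never exceeds the best single model
print(w, combined, combined_var)
```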
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the Wind spacecraft at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
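The linear force-free model in question is conventionally the Lundquist solution; below is a sketch of fitting it to a synthetic axis-crossing profile. The amplitude, alpha, and the use of scipy's least_squares are assumptions, and the authors' simultaneous fit also includes orientation, geometry, and impact parameter.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import least_squares

def lundquist(r, B0, alpha):
    """Linear force-free (Lundquist) flux rope: axial field B0*J0(alpha*r),
    azimuthal field B0*J1(alpha*r) (positive helicity assumed)."""
    return B0 * j0(alpha * r), B0 * j1(alpha * r)

def residuals(params, r_obs, bax, baz):
    ax, az = lundquist(r_obs, *params)
    return np.concatenate([ax - bax, az - baz])

# Synthetic crossing through the axis (zero impact parameter)
r_obs = np.abs(np.linspace(-1.0, 1.0, 41))
bax, baz = lundquist(r_obs, 20.0, 2.4)        # "observed" profile, nT
fit = least_squares(residuals, x0=[10.0, 2.0], args=(r_obs, bax, baz))
print(fit.x)                                   # -> approximately [20.0, 2.4]
```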
Electron Pitch-Angle Distribution in Pressure Balance Structures Measured by Ulysses/SWOOPS
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Six, N. Frank (Technical Monitor)
2002-01-01
Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes. Yamauchi et al. [2002] investigated the magnetic structures of the PBSs, applying a minimum variance analysis to Ulysses magnetometer data. They found that PBSs contain structures like current sheets or plasmoids, and suggested that PBSs are associated with network activity such as magnetic reconnection in the photosphere at the base of polar plumes. We have investigated energetic electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists, and we have found evidence supporting the earlier conclusions. We find that 45 out of 53 PBSs show local bi-directional or isotropic electron flux or flux associated with current-sheet structure. Only five events show the pitch-angle distribution expected for Alfvenic fluctuations. We conclude that PBSs do contain magnetic structures such as current sheets or plasmoids that are expected as a result of network activity at the base of polar plumes.
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
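The generalized-log family has the closed form glog(x) = ln(x + sqrt(x^2 + lambda)); the sketch below shows its first-order variance stabilization on an invented two-component error model, with lambda set from the additive and multiplicative noise levels as in the cited papers.

```python
import numpy as np

def glog(x, lam):
    """Generalized-log transform; variance-stabilizing to first order for
    data with additive plus multiplicative error components."""
    return np.log(x + np.sqrt(x**2 + lam))

rng = np.random.default_rng(0)
mu = np.logspace(1, 4, 7)[:, None]                  # true intensity levels
sigma_add, sigma_mult = 20.0, 0.1
y = mu * np.exp(sigma_mult * rng.standard_normal((7, 5000))) \
    + sigma_add * rng.standard_normal((7, 5000))
lam = (sigma_add / sigma_mult) ** 2                 # first-order stabilizing choice
print(np.var(glog(y, lam), axis=1))                 # roughly constant across levels
```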
Minimal Model of Prey Localization through the Lateral-Line System
NASA Astrophysics Data System (ADS)
Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo
2003-10-01
The clawed frog Xenopus is an aquatic predator catching prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even in case several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
The response of neurons in areas V1 and MT of the alert rhesus monkey to moving random dot patterns.
Snowden, R J; Treue, S; Andersen, R A
1992-01-01
We studied the response of single units to moving random dot patterns in areas V1 and MT of the alert macaque monkey. Most cells could be driven by such patterns; however, many cells in V1 did not give a consistent response but fired at a particular point during stimulus presentation. Thus different dot patterns can produce a markedly different response at any particular time, though the time averaged response is similar. A comparison of the directionality of cells in both V1 and MT using random dot patterns shows the cells of MT to be far more directional. In addition our estimates of the percentage of directional cells in both areas are consistent with previous reports using other stimuli. However, we failed to find a bimodality of directionality in V1 which has been reported in some other studies. The variance associated with response was determined for individual cells. In both areas the variance was found to be approximately equal to the mean response, indicating little difference between extrastriate and striate cortex. These estimates are in broad agreement (though the variance appears a little lower) with those of V1 cells of the anesthetized cat. The response of MT cells was simulated on a computer from the estimates derived from the single unit recordings. While the direction tuning of MT cells is quite wide (mean half-width at half-height approximately 50 degrees) it is shown that the cells can reliably discriminate much smaller changes in direction, and the performance of the cells with the smallest discriminanda were comparable to thresholds measured with human subjects using the same stimuli (approximately 1.1 degrees). Minimum discriminanda for individual cells occurred not at the preferred direction, that is, the peak of their tuning curves, but rather on the steep flanks of their tuning curves. This result suggests that the cells which may mediate the discrimination of motion direction may not be the cells most sensitive to that direction.
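The final observation can be illustrated with a Fisher-style calculation: with response variance approximately equal to the mean, as reported, the just-discriminable direction change scales as sqrt(f(theta))/|f'(theta)|, which is smallest on the tuning-curve flanks; the Gaussian tuning curve and firing rates below are assumptions.

```python
import numpy as np

theta = np.linspace(-180.0, 180.0, 3601)
sigma = 50.0 / np.sqrt(2.0 * np.log(2.0))     # Gaussian with 50 deg HWHH
f = 2.0 + 30.0 * np.exp(-0.5 * (theta / sigma) ** 2)   # mean spike count
df = np.gradient(f, theta)

# Poisson-like noise (variance ~ mean): threshold ~ sqrt(variance) / |slope|
delta = np.sqrt(f) / np.maximum(np.abs(df), 1e-9)
print(theta[np.argmin(delta)])                # minimum on a flank, not at 0 deg
```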
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
Lekking without a paradox in the buff-breasted sandpiper
Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.
1997-01-01
Females in lek-breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff-breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single-locus minisatellite DNA probes to provide the first evidence from a lek-breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff-breasted sandpipers. The behavior of other lek-breeding birds is sufficiently similar to that of buff-breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
NASA Technical Reports Server (NTRS)
Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)
2001-01-01
A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer a strong beam toward the desired signal through its computed weight vectors. However, weights computed by LCMV alone are often unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
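For reference, the closed-form LCMV weights that such optimizers refine are w = R^{-1}C(C^H R^{-1}C)^{-1}f; a minimal uniform-linear-array sketch follows, with array geometry, angles, and a white-noise covariance as illustrative assumptions.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """LCMV: minimize w^H R w subject to C^H w = f."""
    RinvC = np.linalg.solve(R, C)
    return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

def ula_steering(n, theta_deg, spacing=0.5):
    # Uniform linear array, element spacing in wavelengths
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 10
C = np.column_stack([ula_steering(n, 0.0),     # unit gain toward the target user
                     ula_steering(n, 40.0)])   # null on the interference source
f = np.array([1.0, 0.0])
R = np.eye(n)                                  # white-noise covariance (assumed)
w = lcmv_weights(R, C, f)
print(abs(w.conj() @ ula_steering(n, 0.0)),    # -> 1.0 (distortionless)
      abs(w.conj() @ ula_steering(n, 40.0)))   # -> ~0.0 (null placed)
```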
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
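For a Gaussian noise model r = m(p) + n with n ~ N(0, Sigma), computing CRBs reduces to inverting the Fisher matrix J = H^T Sigma^{-1} H, with H the model Jacobian; below is a generic numerical sketch in which a toy two-parameter model stands in for the bio-optical reflectance models.

```python
import numpy as np

def crb(model, p, Sigma, eps=1e-6):
    """Cramer-Rao bounds for unbiased estimation of p from r = model(p) + n,
    n ~ N(0, Sigma): CRB = diag(J^{-1}) with Fisher matrix J = H^T Sigma^{-1} H."""
    r0 = model(p)
    H = np.empty((r0.size, p.size))
    for k in range(p.size):                   # numerical Jacobian
        dp = np.zeros_like(p); dp[k] = eps
        H[:, k] = (model(p + dp) - r0) / eps
    J = H.T @ np.linalg.solve(Sigma, H)
    return np.diag(np.linalg.inv(J))

# Toy two-parameter "reflectance" model over 50 bands (illustrative only)
wl = np.linspace(0.4, 0.8, 50)
model = lambda p: p[0] * np.exp(-p[1] * wl)
print(crb(model, np.array([0.1, 2.0]), 1e-6 * np.eye(50)))
```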
Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.
Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R
2012-01-01
This paper presents a study of self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure (GMV2DOF), from two adaptive perspectives. The first, from the process-model point of view, uses a recursive least-squares estimator for direct self-tuning design; the second uses a Mamdani fuzzy GMV2DOF parameter-scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in experiments on real plants: a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
A method for minimum risk portfolio optimization under hybrid uncertainty
NASA Astrophysics Data System (ADS)
Egorova, Yu E.; Yazenin, A. V.
2018-03-01
In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
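As a crisp-number point of reference for the model above (this is the classical global minimum-variance portfolio, not the paper's fuzzy-random formulation with the drastic t-norm): minimizing wᵀΣw subject to the weights summing to 1 gives w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A short sketch with an invented covariance matrix:

import numpy as np

def min_variance_weights(cov):
    # Global minimum-variance portfolio: w = S^-1 1 / (1^T S^-1 1).
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])   # invented asset covariances
w = min_variance_weights(cov)
print(w, w @ cov @ w)                  # weights sum to 1; minimal variance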
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter in a weighted average whose scalar weights are inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
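A minimal numeric sketch of the univariate composite estimate described above, combining two prior estimates with weights inversely proportional to their variances (the values are invented):

def composite(x1, var1, x2, var2):
    # Inverse-variance weighted combination of two estimates of the same
    # parameter; the combined variance is smaller than either input.
    w = var2 / (var1 + var2)            # weight on x1
    return w * x1 + (1 - w) * x2, var1 * var2 / (var1 + var2)

print(composite(100.0, 4.0, 110.0, 16.0))  # -> (102.0, 3.2)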
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection conventionally means ‘minimizing the risk, given a certain level of returns’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may pose a problem in practice because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz’s model), and semi-variance as the risk measure. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
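A minimal sketch of the scenario-based CVaR computation that underlies the mean-CVaR objective (the genetic algorithm and the minimum-lot constraints of the paper are omitted, and the return sample is simulated rather than real market data):

import numpy as np

def cvar(returns, alpha=0.95):
    # Conditional value-at-risk: the mean loss in the worst (1 - alpha)
    # tail, with losses taken as negative returns.
    losses = -np.asarray(returns)
    var_level = np.quantile(losses, alpha)
    return losses[losses >= var_level].mean()

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.001, 0.02, 10_000)   # simulated return scenarios
print(cvar(daily_returns, alpha=0.95))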
NASA Astrophysics Data System (ADS)
Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza
2018-02-01
In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer degrades in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, an eigenspace-based minimum variance (EIBMV) beamformer is employed for second harmonic USI. Tissue harmonic imaging (THI) is achieved by the pulse inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of varying the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and proper EIBMV parameters are indicated for THI.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
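A toy version of the minimum-variance calculation may clarify the idea: with w ∝ Q^b, d ∝ Q^f, and v ∝ Q^m, continuity (Q = w d v) forces b + f + m = 1, and the theory picks the exponents minimizing the sum of squared exponents of the chosen variables. The variable set below (width, depth, velocity, bed shear ∝ dS, friction factor ∝ dS/v², with slope held constant) is one plausible choice for illustration, not necessarily the combination Williams tests:

import numpy as np
from scipy.optimize import minimize

def total_variance(x):
    # Sum of squared hydraulic exponents for width (b), depth (f),
    # velocity (m), bed shear (exponent f, slope constant) and friction
    # factor (exponent f - 2m); this variable set is illustrative.
    b, f, m = x
    return b**2 + f**2 + m**2 + f**2 + (f - 2.0 * m)**2

res = minimize(total_variance, x0=[1/3, 1/3, 1/3],
               constraints={"type": "eq", "fun": lambda x: np.sum(x) - 1.0})
print(res.x)   # predicted (b, f, m), summing to 1 by continuity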
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a newer algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.
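A minimal sketch of plain DMAS on time-aligned channel data, using the identity sum_{i<j} s_i s_j = ((sum s)^2 - sum s^2)/2 applied to the signed square roots; the MVB-DMAS variant of the paper replaces the DAS-like inner sums with MV-weighted ones, which is not reproduced here:

import numpy as np

def das(delayed):
    # Delay-and-sum over channels; `delayed` is (n_elements, n_samples),
    # already time-aligned for the focal point.
    return delayed.sum(axis=0)

def dmas(delayed):
    # Delay-multiply-and-sum: signed square roots keep the original
    # dimensionality before the pairwise products are summed.
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum(axis=0)
    return 0.5 * (total**2 - (s**2).sum(axis=0))

rng = np.random.default_rng(2)
aligned = rng.standard_normal((16, 1024))   # stand-in for delayed RF data
print(das(aligned).shape, dmas(aligned).shape)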
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
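The core of such a variance analysis is the subtraction of an independently estimated noise variance from the total along-track variance. A synthetic illustration (the wave amplitude and noise level are invented; the paper estimates the noise term from the data themselves):

import numpy as np

rng = np.random.default_rng(3)
wave = 0.5 * np.sin(2 * np.pi * np.arange(200) / 25)   # synthetic GW signal (K)
noise_sigma = 0.3                                      # invented noise level (K)
scan = wave + rng.normal(0.0, noise_sigma, wave.size)

var_total = scan.var()
var_gw = var_total - noise_sigma**2        # noise-corrected variance
print(var_total, var_gw, wave.var())       # var_gw ~ true wave variance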
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Some refinements on the comparison of areal sampling methods via simulation
Jeffrey Gove
2017-01-01
The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...
NASA Astrophysics Data System (ADS)
Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.
2016-05-01
The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ¹⁸O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ¹⁸O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ¹⁸O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ¹⁸O variability in Borneo coincides with an overall minimum in mean δ¹⁸O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of the proposed algorithm under low-SNR conditions. Simulation results show the superior performance of our proposed methods.
Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.
Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V
2014-12-01
Groundnut in storage is attacked by a number of stored-grain pests; management of these insects, particularly the bruchid beetle Caryedon serratus (Olivier), is of prime importance as they directly damage pods and kernels. In this regard, different storage bags and the duration for which groundnut can be stored were studied. The super grain bag recorded the minimum number of eggs laid, the least damage, and the minimum weight loss in pods and kernels in comparison with the other storage bags. Analysis of variance for multiple regression models was significant in all bags for the variables number of eggs laid, damage in pods and kernels, and weight loss in pods and kernels throughout the season. Multiple comparisons showed a high probability of egg laying and pod damage in the lino bag, fertilizer bag, and gunny bag, whereas the super grain bag was more effective in managing C. serratus owing to its very low air circulation.
Measuring the Power Spectrum with Peculiar Velocities
NASA Astrophysics Data System (ADS)
Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-01-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the ΛCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹, although with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Power spectrum estimation from peculiar velocity catalogues
NASA Astrophysics Data System (ADS)
Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.
2012-09-01
The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹ with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra; Darzi, Soodabeh
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered as an optimization problem, such that optimal weight vector should be obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and increase the signal to interference noise ratio (SINR) for wanted signals. PMID:25003136
25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?
Code of Federal Regulations, 2010 CFR
2010-04-01
25 CFR 542.18 (National Indian Gaming Commission, Department of the Interior, Minimum Internal Control Standards): procedure by which a gaming operation applies for a variance from the standards of this part.
Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado
Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.
2003-01-01
The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r² value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r² values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modeling, and climate change assessments.
Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region
NASA Astrophysics Data System (ADS)
Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.
2005-08-01
Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
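For reference, the minimum variance method named here is the classic eigen-analysis of the field covariance matrix: the eigenvector with the smallest eigenvalue approximates the phase-front normal (the same analysis underlies several of the heliospheric entries in this listing). A minimal sketch on synthetic data:

import numpy as np

def minimum_variance_direction(B):
    # Eigen-decomposition of the field covariance matrix; the eigenvector
    # with the smallest eigenvalue is the minimum variance direction.
    vals, vecs = np.linalg.eigh(np.cov(B, rowvar=False))  # ascending order
    return vecs[:, 0], vals

rng = np.random.default_rng(4)
B = rng.standard_normal((1000, 3)) * np.array([2.0, 1.0, 0.1])  # synthetic field
n_hat, eigenvalues = minimum_variance_direction(B)
print(n_hat, eigenvalues)   # n_hat ~ +/- z; eigenvalue ratios give anisotropy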
Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen
2018-01-31
It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
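A minimal sketch of the quadratic-fit peak extraction discussed above: fit a parabola over a window around the raw maximum and take its vertex as the BFS. The window length and centering are exactly the data-range choices whose effect on the BFS variance the paper analyzes (the spectrum parameters below are invented):

import numpy as np

def bfs_from_quadratic_fit(freqs, gain, half_window=5):
    # Fit a parabola over a window centered on the raw maximum and return
    # its vertex as the peak frequency.
    i = int(np.argmax(gain))
    lo, hi = max(0, i - half_window), min(len(freqs), i + half_window + 1)
    x = freqs[lo:hi] - freqs[i]            # center for numerical stability
    a, b, _ = np.polyfit(x, gain[lo:hi], 2)
    return freqs[i] - b / (2 * a)

f = np.linspace(10.6e9, 11.0e9, 81)        # 5 MHz frequency steps (invented)
true_bfs = 10.82e9
gain = 1.0 / (1.0 + ((f - true_bfs) / 15e6) ** 2)   # Lorentzian gain shape
gain += np.random.default_rng(5).normal(0.0, 0.05, f.size)
print(bfs_from_quadratic_fit(f, gain) - true_bfs)   # error in Hz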
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations made over time; they are said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
NASA Astrophysics Data System (ADS)
J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul
2015-05-01
In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold on market share. This paper proposes a design flow around parasitic extraction to improve design cycle time. The proposed flow uses metal fill emulation, as opposed to the current flow, which performs metal fill insertion directly. Replacing metal fill structures with an emulation methodology in earlier iterations of the design flow is targeted at reducing runtime in the fill insertion stage. A statistical design-of-experiments methodology using a randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1x to 6x the minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4x the minimum metal width.
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Goldstein, Melvyn L.; Roberts, D. Aaron
1990-01-01
Assuming that the slab and isotropic models of solar wind turbulence need modification (largely due to the observed anisotropy of the interplanetary fluctuations and the results of laboratory plasma experiments), this paper proposes a model of the solar wind. The solar wind is seen as a fluid which contains both classical transverse Alfvenic fluctuations and a population of quasi-transverse fluctuations. In quasi-two-dimensional turbulence, the pitch angle scattering by resonant wave-particle interactions is suppressed, and the direction of minimum variance of interplanetary fluctuations is parallel to the mean magnetic field. The assumed incompressibility is consistent with the fact that the density fluctuations are small and anticorrelated, and that the total pressure at small scales is nearly constant.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
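The jackknife procedure recommended here is easy to state in code: recompute the estimator on each leave-one-out sample and scale the spread of those replicates. A generic sketch (using the sample mean as the estimator so the result can be checked against s²/n; the genetic variance-component estimators of the paper are not reproduced):

import numpy as np

def jackknife_variance(data, estimator):
    # Leave-one-out replicates of the estimator; their scaled spread
    # estimates the sampling variance of the estimator.
    n = len(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, 50)
# For the sample mean the jackknife should closely match s^2 / n.
print(jackknife_variance(x, np.mean), x.var(ddof=1) / len(x))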
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses parameter values for the priors. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for these regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 µm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
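The structure of the problem can be seen in a first-order discrete sketch: for x[k+1] = a·x[k] + b·u[k] + w[k] with full, instantaneous state knowledge, the minimum-variance law u[k] = -(a/b)·x[k] cancels the predictable part and leaves var(x) at the one-step noise floor var(w); the measurement delay and actuator dynamics modeled in the paper raise this floor. (The parameter values below are invented.)

import numpy as np

a, b, sigma_w = 0.9, 1.0, 0.05          # invented model parameters
rng = np.random.default_rng(7)
x_open, x_closed = 0.0, 0.0
open_hist, closed_hist = [], []
for _ in range(20_000):
    w = rng.normal(0.0, sigma_w)        # random (thermal) forcing
    x_open = a * x_open + w             # uncontrolled probe
    x_closed = a * x_closed + b * (-(a / b) * x_closed) + w  # MV control
    open_hist.append(x_open)
    closed_hist.append(x_closed)
# Open-loop variance ~ sigma_w^2 / (1 - a^2); closed-loop ~ sigma_w^2.
print(np.var(open_hist), np.var(closed_hist), sigma_w**2)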
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay-and-sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay-multiply-and-sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally; the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients as well as unknown measurement biases may cause large estimation errors of conventional Kalman filters. This paper proposes a derivative-free version of nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously with derivative-free character, leading to a high-precision algorithm for the Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter could achieve an accurate state estimation, much better than the conventional unscented Kalman filter, showing the ability of high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of short-period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis, and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region, with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributable to the scattering of Alfven wave energy into random magnetosonic waves.
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performance of the proposed method was evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance imaging, ultrasound and clinical evaluation. Maximum accuracy (AUC = 0.88) was reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets.
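For context, the conventional MVDR baseline compared against here evaluates the Capon spatial spectrum P(θ) = 1 / (aᴴ(θ) R⁻¹ a(θ)) over candidate angles. A narrowband sketch (array size, source angles, and SNR are invented; the paper's CSS focusing and compressed-sensing reconstruction are not reproduced):

import numpy as np

def steer(theta_deg, m, spacing=0.5):
    return np.exp(2j * np.pi * spacing * np.arange(m)
                  * np.sin(np.deg2rad(theta_deg)))

def mvdr_spectrum(R, m, angles_deg):
    # Capon/MVDR spatial spectrum: P(theta) = 1 / (a^H R^-1 a).
    Rinv = np.linalg.inv(R)
    return np.array([1.0 / np.real(steer(t, m).conj() @ Rinv @ steer(t, m))
                     for t in angles_deg])

# Two uncorrelated narrowband sources at -20 and 25 degrees, 12-element ULA.
M, N = 12, 2000
rng = np.random.default_rng(8)
A = np.column_stack([steer(-20, M), steer(25, M)])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N

scan = np.arange(-90, 91)
P = mvdr_spectrum(R, M, scan)
peaks = scan[1:-1][(P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])]
print(peaks)  # local maxima, near the true DOAs of -20 and 25 degrees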
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurements
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2002-01-01
Measurements of the interplanetary magnetic field (IMF) from the ACE (Advanced Composition Explorer), Wind, IMP-8 (Interplanetary Monitoring Platform), and Geotail spacecraft have revealed that the IMF variations are contained in phase planes that are tilted with respect to the propagation direction, resulting in continuously variable changes in propagation times between spacecraft, and therefore, to the Earth. Techniques for using 'minimum variance analysis' have been developed in order to be able to measure the phase front tilt angles, and better predict the actual propagation times from the L1 orbit to the Earth, using only the real-time IMF measurements from one spacecraft. The use of empirical models with the IMF measurements at L1 from ACE (or future satellites) for predicting 'space weather' effects has also been demonstrated.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
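A first-order Markov (Gauss-Markov) process is simple to simulate: x[k+1] = e^(-dt/τ)·x[k] + w[k], with the driving noise variance chosen to fix the steady-state variance. The sketch below sums five such processes, as in the approximation described above; the time constants and amplitudes are placeholders, not fitted to any real oscillator's Allan variance:

import numpy as np

def gauss_markov(n, dt, tau, sigma, rng):
    # First-order Markov process with correlation time tau and
    # steady-state standard deviation sigma.
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = phi * x[k] + q * rng.standard_normal()
    return x

rng = np.random.default_rng(9)
dt = 1.0                                      # s
taus = [1e1, 1e2, 1e3, 1e4, 1e5]              # placeholder correlation times (s)
sigmas = [1e-12, 5e-13, 2e-13, 1e-13, 5e-14]  # placeholder amplitudes
clock = sum(gauss_markov(100_000, dt, t, s, rng) for t, s in zip(taus, sigmas))
print(np.std(clock))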
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
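For orientation, the classical normal-approximation sample size for a two-sample mean comparison with unequal variances is n1 = (z(1-alpha/2) + z(power))²(sd1² + sd2²/ratio)/delta², with n2 = ratio·n1; the Yuen-test formulas of the paper refine this by replacing means and variances with trimmed means and Winsorized variances. A sketch of the classical version only (not the paper's formulas):

import math
from scipy.stats import norm

def approx_n_per_group(delta, sd1, sd2, alpha=0.05, power=0.80, ratio=1.0):
    # Classical normal-approximation sample size for detecting a mean
    # difference delta with unequal variances; shown only to illustrate
    # the ingredients the Yuen-test formulas refine.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n1 = z**2 * (sd1**2 + sd2**2 / ratio) / delta**2
    return math.ceil(n1), math.ceil(ratio * math.ceil(n1))

print(approx_n_per_group(delta=0.5, sd1=1.0, sd2=2.0))  # -> (n1, n2)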
Orientations of dendritic growth during solidification
NASA Astrophysics Data System (ADS)
Lee, Dong Nyung
2017-03-01
Dendrites are crystalline forms which grow far from the limit of stability of a planar front and adopt an orientation as close as possible to the heat flux direction. Dendritic growth orientations for cubic metals, bct Sn, and hcp Zn can be understood in terms of thermal conductivity, Young's modulus, and surface energy; these control factors are elaborated here. Since a dendrite is a single crystal, the properties shown to control its growth direction are the maximum thermal conductivity direction (which influences the heat flux), the minimum Young's modulus direction (which minimizes strain energy), and the minimum surface energy plane (which minimizes crystal/liquid interface energy). The dendritic growth directions of cubic metals are determined by the minimum Young's modulus direction and/or the axis of symmetry of the minimum crystal surface energy plane. The dendritic growth direction of bct Sn is determined by its maximum thermal conductivity direction and the minimum surface energy plane normal. The primary dendritic growth direction of hcp Zn is determined by its maximum thermal conductivity direction and the minimum surface energy plane normal, and the secondary dendrite arm direction of hcp Zn is normal to the primary growth direction.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, minimum-variance beamformers offer high spatial resolution; however, sensor-space ICA + beamformer is not an ideal combination for obtaining both the beamformer's high spatial resolution and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative that provides the high spatial resolution of the beamformer while handling multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we compare source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We also propose a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Brühl, Albert; Planer, Katarina; Hagel, Anja
2018-01-01
A validity test was conducted to determine how care level-based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level-based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
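A sketch of the residual-spectrum computation described above, assuming hourly density residuals in `resid`; the series here is a random stand-in, and a real analysis would fit the power law away from the enhancement at the 27-day rotation period:

```python
import numpy as np
from scipy import signal

# Stand-in for hourly residuals, e.g. log(observed) - log(modeled) density.
rng = np.random.default_rng(2)
resid = np.cumsum(rng.standard_normal(24 * 365 * 5)) * 1e-3

fs = 1.0 / 3600.0                          # one sample per hour, in Hz
f, psd = signal.welch(resid, fs=fs, nperseg=4096)

# Log-log fit of PSD vs frequency to estimate the power-law exponent.
mask = f > 0
slope, intercept = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
print("spectral slope:", slope)
```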
2018-01-01
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets. PMID:29562642
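For contrast with the compressed-sensing approach, a minimal narrowband MVDR spatial-spectrum scan for a uniform line array (numpy only; the geometry and diagonal loading are illustrative assumptions, and the wideband CSS focusing step is not reproduced):

```python
import numpy as np

def mvdr_spectrum(X, d, wavelength, angles_deg, loading=1e-6):
    """MVDR spatial spectrum for a uniform line array.
    X: (n_sensors, n_snapshots) complex narrowband snapshots;
    d: element spacing in meters."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                      # sample covariance
    Ri = np.linalg.inv(R + loading * np.trace(R).real / n * np.eye(n))
    k = 2.0 * np.pi / wavelength
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-1j * k * d * np.arange(n) * np.sin(th))  # steering vector
        spec.append(1.0 / np.real(a.conj() @ Ri @ a))        # MVDR power
    return np.array(spec)

# usage: P = mvdr_spectrum(X, d=0.5 * lam, wavelength=lam,
#                          angles_deg=np.arange(-90, 90.5, 0.5))
```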
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A-1), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg², with and without A-1, respectively. Heritability estimates for direct additive effects (h²A) were .211 and .210 with and without A-1, respectively. Heritability estimates for maternal additive effects (h²M) were .47 and .44 with and without A-1, respectively. Correlations between direct and maternal effects (rAM) were -.57 and -.52 with and without A-1, respectively.
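The reported ratios can be reproduced directly from the components above; a quick check, assuming the phenotypic variance is the plain sum of the five components (an assumption, but one consistent with the published numbers):

```python
# Variance components estimated with the relationship matrix (A-1) included.
var_A, var_M, cov_AM, var_PE, var_E = 139.05, 307.04, -117.57, -258.35, 588.18

var_P = var_A + var_M + cov_AM + var_PE + var_E   # phenotypic variance, 658.35
h2_direct = var_A / var_P                         # ~0.211, matches .211
h2_maternal = var_M / var_P                       # ~0.47,  matches .47
r_AM = cov_AM / (var_A * var_M) ** 0.5            # ~-0.57, matches -.57
print(round(h2_direct, 3), round(h2_maternal, 2), round(r_AM, 2))
```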
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
…processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. … [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using…
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
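A minimal sketch of the first criterion, maximum Fisher information under a 2PL response model; the item parameters are made up, and the testlet grouping and Bayesian criteria are omitted:

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

# Select the most informative remaining item at the current ability estimate.
theta_hat = 0.3
a_params = np.array([1.2, 0.8, 1.6, 1.0])   # hypothetical discriminations
b_params = np.array([-0.5, 0.2, 0.4, 1.1])  # hypothetical difficulties
next_item = int(np.argmax(fisher_info_2pl(theta_hat, a_params, b_params)))
```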
Husby, Arild; Gustafsson, Lars; Qvarnström, Anna
2012-01-01
The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
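A small sketch of the simple estimator discussed above: under normality, π = Φ(δ), so the standardized mean difference d maps to an estimate of π, and a large-sample confidence interval for δ transforms through the normal CDF. This uses the standard large-sample variance of d, not the paper's exact or minimum variance unbiased results:

```python
import numpy as np
from scipy import stats

def overlap_pi(d, n1, n2, alpha=0.05):
    """Estimate pi = Phi(delta) from the standardized mean difference d,
    with a CI from the usual large-sample SE of d."""
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - alpha / 2)
    lo, hi = d - z * se_d, d + z * se_d
    return stats.norm.cdf(d), (stats.norm.cdf(lo), stats.norm.cdf(hi))

pi_hat, ci = overlap_pi(d=0.5, n1=30, n2=30)   # ~0.69 of treated above control mean
```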
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls-risk elderly subjects with a history of tripping falls was analyzed. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (β) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that β between scales 1 and 2 is effective for recognizing falls-risk gait patterns. Results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
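A sketch of the multiscale variance computation, assuming PyWavelets is available; the wavelet choice and exact scale convention are assumptions and may differ from the paper's:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def multiscale_variance_beta(mtc, wavelet="db4", levels=8):
    """Variance of DWT detail coefficients at scales 1..levels and the
    slope (beta) of log2(variance) across successive scales."""
    coeffs = pywt.wavedec(mtc, wavelet, level=levels)
    details = coeffs[1:][::-1]                 # reorder as scale 1..levels
    variances = np.array([np.var(d) for d in details])
    scales = np.arange(1, levels + 1)
    beta = np.polyfit(scales, np.log2(variances), 1)[0]
    return variances, beta
```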
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range spans short- to long-term population fitness and evolutionary potential, where the length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
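The adjustments named above are simple to compute; a sketch using the textbook population-genetics formulas for unequal sex ratio and fluctuating size (the paper's full step-by-step procedure is not reproduced):

```python
def ne_sex_ratio(n_males, n_females):
    """Effective size under an unequal breeding sex ratio."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def ne_fluctuating(sizes):
    """Harmonic mean of per-generation effective sizes."""
    return len(sizes) / sum(1.0 / n for n in sizes)

def inbreeding_per_generation(ne):
    """Rate of inbreeding: delta-F = 1 / (2 Ne)."""
    return 1.0 / (2.0 * ne)

ne = ne_sex_ratio(10, 40)            # 32: a skewed sex ratio cuts Ne sharply
df = inbreeding_per_generation(50)   # 0.01, the 1%-per-generation threshold
```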
NASA Astrophysics Data System (ADS)
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
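As a reference point, the coherence factor weighting mentioned above has a standard form; a sketch (the delay-and-sum alignment itself and the paper's HOG-based array weighting are omitted):

```python
import numpy as np

def coherence_factor(delayed):
    """delayed: (n_elements,) channel samples after delay alignment for one
    pixel. CF is ~1 for coherent signals and ~1/N for incoherent noise."""
    num = np.abs(delayed.sum()) ** 2
    den = len(delayed) * np.sum(np.abs(delayed) ** 2)
    return num / den if den > 0 else 0.0

# CF-weighted pixel value: delay-and-sum (or MV) output scaled by CF.
# pixel = coherence_factor(ch) * ch.sum()
```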
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
ERIC Educational Resources Information Center
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITVWI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
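A baseline sketch with PyWavelets: the usual MAD noise-variance estimate from the finest-scale detail coefficients plus a universal soft threshold. The paper's two-state Gaussian mixture classification and its improved hard/soft hybrid threshold are not reproduced:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def denoise(signal_in, wavelet="db6", level=5):
    coeffs = pywt.wavedec(signal_in, wavelet, level=level)
    # Donoho-Johnstone noise estimate from the finest-scale details;
    # the paper refines this step with a two-state Gaussian mixture model.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal_in)))   # universal threshold
    new_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
    return pywt.waverec(new_coeffs, wavelet)
```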
Application of inertial instruments for DSN antenna pointing and tracking
NASA Technical Reports Server (NTRS)
Eldred, D. B.; Nerheim, N. M.; Holmes, K. G.
1990-01-01
The feasibility of using inertial instruments to determine the pointing attitude of the NASA Deep Space Network antennas is examined. The objective is to obtain 1 mdeg pointing knowledge in both blind pointing and tracking modes to facilitate operation of the Deep Space Network 70 m antennas at 32 GHz. A measurement system employing accelerometers, an inclinometer, and optical gyroscopes is proposed. The initial pointing attitude is established by determining the direction of the local gravity vector using the accelerometers and the inclinometer, and the Earth's spin axis using the gyroscopes. Pointing during long-term tracking is maintained by integrating the gyroscope rates and augmenting these measurements with knowledge of the local gravity vector. A minimum-variance estimator is used to combine measurements to obtain the antenna pointing attitude. A key feature of the algorithm is its ability to recalibrate accelerometer parameters during operation. A survey of available inertial instrument technologies is also given.
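At its core, such an estimator combines redundant measurements by inverse-variance weighting; a scalar sketch with made-up numbers standing in for, e.g., gyro-integrated versus gravity-referenced elevation angles:

```python
def minimum_variance_combine(estimates, variances):
    """Minimum-variance (inverse-variance weighted) fusion of independent
    scalar measurements of the same quantity."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    x = sum(w * e for w, e in zip(weights, estimates)) / total
    return x, 1.0 / total   # fused estimate and its (smaller) variance

est, var = minimum_variance_combine([10.2, 9.7], [0.04, 0.09])  # mdeg, illustrative
```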
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that the basic assumptions of these methods are not well satisfied. We then analyze the internal structure more finely to investigate the departures from planarity. Using the basic mathematical definition of what constitutes a one-dimensional physical problem, we introduce a new single-spacecraft method, called local normal analysis (LNA), for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). The latter method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximately one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from that of the other fields, so that, at some places, the magnetic field can have a 1-D structure although the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
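For reference, the classic single-spacecraft minimum variance analysis used as the starting point above reduces to an eigen-decomposition of the magnetic covariance matrix; a minimal numpy sketch (the MDD and LNA methods are not reproduced):

```python
import numpy as np

def minimum_variance_normal(B):
    """Classic MVA. B: (n_samples, 3) magnetic field measurements.
    Returns the eigenvector with the smallest eigenvalue, the usual
    single-spacecraft estimate of the boundary normal."""
    M = np.cov(B.T)                     # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    normal = evecs[:, 0]                # minimum-variance direction
    quality = evals[1] / evals[0]       # intermediate/minimum eigenvalue ratio
    return normal, quality
```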
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Fast computation of an optimal controller for large-scale adaptive optics.
Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc
2011-11-01
The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
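The Riccati step can be sketched in a few lines with SciPy's dense solver; the dimensions here are tiny toy values, whereas the article's point is precisely that this computation scales badly with aperture size:

```python
import numpy as np
from scipy import linalg

# Steady-state Kalman gain for x_{k+1} = A x_k + w, y_k = C x_k + v,
# with process noise covariance Q and measurement noise covariance R.
n, m = 8, 4                                  # toy state/measurement sizes
rng = np.random.default_rng(3)
A = 0.95 * np.eye(n)                         # hypothetical turbulence dynamics
C = rng.standard_normal((m, n))              # hypothetical measurement matrix
Q = 0.1 * np.eye(n)
R = 0.01 * np.eye(m)

P = linalg.solve_discrete_are(A.T, C.T, Q, R)    # prediction covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # steady-state Kalman gain
```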
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
Small-scale Pressure-balanced Structures Driven by Mirror-mode Waves in the Solar Wind
NASA Astrophysics Data System (ADS)
Yao, Shuo; He, J.-S.; Tu, C.-Y.; Wang, L.-H.; Marsch, E.
2013-10-01
Recently, small-scale pressure-balanced structures (PBSs) have been studied with regard to their dependence on the direction of the local mean magnetic field B0. The present work continues these studies by investigating the compressive wave mode forming small PBSs, here for B0 quasi-perpendicular to the x-axis of Geocentric Solar Ecliptic coordinates (GSE-x). All the data used were measured by WIND in the quiet solar wind. From the distribution of PBSs on the plane determined by the temporal scale and the angle θxB between the GSE-x and B0, we notice that at θxB = 115° the PBSs appear at temporal scales ranging from 700 s to 60 s. In the corresponding temporal segment, the correlations between the plasma thermal pressure Pth and the magnetic pressure PB, as well as that between the proton density Np and the magnetic field strength B, are investigated. In addition, we use the proton velocity distribution functions to calculate the proton temperatures T⊥ and T∥. Minimum Variance Analysis is applied to find the magnetic field minimum variance vector BN. We also study the time variation of the cross-helicity σc and the compressibility Cp and compare these with values from numerical predictions for the mirror mode. In this way, we finally identify a short segment that has T⊥ > T∥, proton β ≈ 1, both pairs of Pth-PB and Np-B showing anti-correlation, and σc ≈ 0 with Cp > 0. Although the examination of σc and Cp is not conclusive, it provides helpful additional information for the wave mode identification. Additionally, BN is found to be highly oblique to B0. Thus, this work suggests that a candidate mechanism for forming small-scale PBSs in the quiet solar wind is mirror-mode waves.
VARIANCE ANISOTROPY IN KINETIC PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
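For the static baseline, the global minimum-variance portfolio has a closed form, w = C^-1 1 / (1' C^-1 1); a sketch with a hypothetical covariance matrix of three sectorial index returns (the stochastic optimization itself is not reproduced):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights (fully invested)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical annualized covariance of three sectorial index returns.
C = np.array([[0.040, 0.006, 0.012],
              [0.006, 0.025, 0.004],
              [0.012, 0.004, 0.050]])
w = min_variance_weights(C)    # weights sum to 1, tilted to the low-variance index
```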
Enhanced backscatter of a reflected beam in atmospheric turbulence
NASA Astrophysics Data System (ADS)
Churnside, James H.; Wilson, James J.
1993-05-01
We measure the mean and the variance of the irradiance of a diverging laser beam after reflection from a retroreflector and from a plane mirror in a turbulent atmosphere. Increases in both the mean irradiance and the normalized variance are observed in the direct backscatter direction because of correlation of turbulence on the outgoing path and the return path. The backscattered irradiance is enhanced by a factor of about 2 and the variance by somewhat less.
Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan
2016-05-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
Direct and indirect genetic and fine-scale location effects on breeding date in song sparrows.
Germain, Ryan R; Wolak, Matthew E; Arcese, Peter; Losdat, Sylvain; Reid, Jane M
2016-11-01
Quantifying direct and indirect genetic effects of interacting females and males on variation in jointly expressed life-history traits is central to predicting microevolutionary dynamics. However, accurately estimating sex-specific additive genetic variances in such traits remains difficult in wild populations, especially if related individuals inhabit similar fine-scale environments. Breeding date is a key life-history trait that responds to environmental phenology and mediates individual and population responses to environmental change. However, no studies have estimated female (direct) and male (indirect) additive genetic and inbreeding effects on breeding date, and estimated the cross-sex genetic correlation, while simultaneously accounting for fine-scale environmental effects of breeding locations, impeding prediction of microevolutionary dynamics. We fitted animal models to 38 years of song sparrow (Melospiza melodia) phenology and pedigree data to estimate sex-specific additive genetic variances in breeding date, and the cross-sex genetic correlation, thereby estimating the total additive genetic variance while simultaneously estimating sex-specific inbreeding depression. We further fitted three forms of spatial animal model to explicitly estimate variance in breeding date attributable to breeding location, overlap among breeding locations and spatial autocorrelation. We thereby quantified fine-scale location variances in breeding date and quantified the degree to which estimating such variances affected the estimated additive genetic variances. The non-spatial animal model estimated nonzero female and male additive genetic variances in breeding date (sex-specific heritabilities: 0·07 and 0·02, respectively) and a strong, positive cross-sex genetic correlation (0·99), creating substantial total additive genetic variance (0·18). Breeding date varied with female, but not male inbreeding coefficient, revealing direct, but not indirect, inbreeding depression. All three spatial animal models estimated small location variance in breeding date, but because relatedness and breeding location were virtually uncorrelated, modelling location variance did not alter the estimated additive genetic variances. Our results show that sex-specific additive genetic effects on breeding date can be strongly positively correlated, which would affect any predicted rates of microevolutionary change in response to sexually antagonistic or congruent selection. Further, we show that inbreeding effects on breeding date can also be sex specific and that genetic effects can exceed phenotypic variation stemming from fine-scale location-based variation within a wild population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shojaei, M; Dumitru, N; Pella, S
2016-06-15
Purpose: High dose rate brachytherapy is a highly localized radiation therapy that has a very high dose gradient. Thus one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review the ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured in regard to their dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction motion, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
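The two-predictor model has the form ANCmin = b0 + b1·ANCpre + b2·ΔQ; a least-squares sketch on hypothetical episode data (the values below are illustrative, not the ERP observations):

```python
import numpy as np

# Hypothetical episodes: pre-episode ANC (ueq/L), relative discharge
# increase, and observed minimum episodic ANC.
anc_pre = np.array([120.0, 80.0, 45.0, 150.0, 60.0, 30.0])
dq = np.array([2.1, 3.5, 1.2, 0.8, 4.0, 2.7])
anc_min = np.array([95.0, 40.0, 30.0, 140.0, 15.0, 5.0])

X = np.column_stack([np.ones_like(anc_pre), anc_pre, dq])
beta, *_ = np.linalg.lstsq(X, anc_min, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((anc_min - pred) ** 2) / np.sum((anc_min - anc_min.mean()) ** 2)
```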
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Experimental study on an FBG strain sensor
NASA Astrophysics Data System (ADS)
Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng
2018-01-01
Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is of great significance in reducing casualties and property losses. In this paper, an FBG strain sensor is designed that combines fiber Bragg gratings (FBGs) with an inclinometer, taking advantage of the high initial precision and high sensitivity of FBGs. The sensor was modeled as a cantilever beam with one end fixed. Accounting for the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics. The accuracy of the formula was verified through laboratory calibration testing and model slope monitoring experiments. The landslide displacement can be calculated with the established theoretical formula from the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, with a corresponding variance of 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, with a corresponding variance of 0.50. The maximum error between the theoretical and measured displacements decreases gradually, as does its variance, indicating that the theoretical results become increasingly reliable. This shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision, early-warning monitoring of slopes.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
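For orientation, the basic post-stratified estimator and its usual variance approximation, treating the realized within-strata sizes as fixed, which is exactly the approximation the within-strata sample-size guidance worries about:

```python
import numpy as np

def post_stratified_mean(y, strata, weights):
    """Post-stratified estimate of the population mean.
    y: sample values; strata: stratum label per unit;
    weights: dict mapping stratum label to known population share W_h."""
    est, var = 0.0, 0.0
    for h, W in weights.items():
        yh = y[strata == h]
        est += W * yh.mean()
        var += W ** 2 * yh.var(ddof=1) / len(yh)  # ignores randomness of n_h
    return est, var
```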
12 CFR 325.6 - Issuance of directives.
Code of Federal Regulations, 2010 CFR
2010-01-01
... is a final order issued to a bank that fails to maintain capital at or above the minimum leverage... operating with less than the minimum leverage capital requirement established by this regulation, the Board... directive requiring the bank to restore its capital to the minimum leverage capital requirement within a...
Mass, R
2005-09-01
This study is the first to directly compare two clinical questionnaires that are both aimed at self-experienced cognitive dysfunctions in schizophrenia: the Eppendorf Schizophrenia Inventory (ESI) and the Frankfurt Complaint Questionnaire (FCQ). We evaluated (a) diagnostic validity, (b) psychometric properties, (c) scale intercorrelations, and (d) factor analytic stability. Ad (a): schizophrenic subjects (n=36) show highly significant increases in the ESI scales and sum score when compared to other clinical groups (patients with depression, alcohol dependence, or obsessive-compulsive disorder, n>30, respectively); the FCQ, on the other hand, yields no systematic group differences. Ad (b): the mean of the reliability coefficients (Cronbach's alpha) of the ESI scales is r(tt)=0.86; the mean reliability of the FCQ scales is significantly lower. Ad (c): the mean intercorrelation between ESI and FCQ scales amounts to r(xy)=0.56 (minimum 0.29, maximum 0.73), corresponding to an average shared variance of about 31%. Ad (d): factor analysis yielded an ESI factor and an FCQ factor; a one-way ANOVA with the factor scores confirms the diagnostic validity of the ESI. ESI and FCQ measure essentially different aspects of schizophrenic psychopathology. Regarding reliability and diagnostic validity, the ESI is superior to the FCQ.
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side-lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
Comparison of progressive addition lenses by direct measurement of surface shape.
Huang, Ching-Yao; Raasch, Thomas W; Yi, Allen Y; Bullimore, Mark A
2013-06-01
To compare the optical properties of five state-of-the-art progressive addition lenses (PALs) by direct physical measurement of surface shape. Five contemporary freeform PALs (Varilux Comfort Enhanced, Varilux Physio Enhanced, Hoya Lifestyle, Shamir Autograph, and Zeiss Individual) with plano distance power and a +2.00-diopter add were measured with a coordinate measuring machine. The front and back surface heights were physically measured, and the optical properties of each surface, and their combination, were calculated with custom MATLAB routines. Surface shape was described as the sum of Zernike polynomials. Progressive addition lenses were represented as contour plots of spherical equivalent power, cylindrical power, and higher order aberrations (HOAs). Maximum power rate, minimum 1.00-DC corridor width, percentage of lens area with less than 1.00 DC, and root mean square of HOAs were also compared. Comfort Enhanced and Physio Enhanced have freeform front surfaces, Shamir Autograph and Zeiss Individual have freeform back surfaces, and Hoya Lifestyle has freeform properties on both surfaces. However, the overall optical properties are similar, regardless of the lens design. The maximum power rate is between 0.08 and 0.12 diopters per millimeter and the minimum corridor width is between 8 and 11 mm. For a 40-mm lens diameter, the percentage of lens area with less than 1.00 DC is between 64 and 76%. The third-order Zernike terms are the dominant high-order terms in HOAs (78 to 93% of overall shape variance). Higher order aberrations are higher along the corridor area and around the near zone. The maximum root mean square of HOAs based on a 4.5-mm pupil size around the corridor area is between 0.05 and 0.06 µm. This nonoptical method using a coordinate measuring machine can be used to evaluate a PAL by surface height measurements, with the optical properties directly related to its front and back surface designs.
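The surface-shape description above lends itself to a small worked example. The sketch below fits a handful of low-order Zernike polynomials to synthetic surface-height samples by least squares and reports the RMS of the third-order (HOA) terms; the basis ordering, normalization, and data are assumptions of the example, not the authors' MATLAB routines.

```python
import numpy as np

def zernike_basis(rho, phi):
    """A few low-order Zernike polynomials (unnormalized, OSA-like ordering)."""
    return np.column_stack([
        np.ones_like(rho),                      # piston
        rho * np.cos(phi),                      # tilt x
        rho * np.sin(phi),                      # tilt y
        2 * rho**2 - 1,                         # defocus (spherical component)
        rho**2 * np.cos(2 * phi),               # astigmatism 0/90
        rho**2 * np.sin(2 * phi),               # astigmatism 45
        (3 * rho**3 - 2 * rho) * np.cos(phi),   # coma x (third order)
        (3 * rho**3 - 2 * rho) * np.sin(phi),   # coma y (third order)
        rho**3 * np.cos(3 * phi),               # trefoil x
        rho**3 * np.sin(3 * phi),               # trefoil y
    ])

# Synthetic "measured" surface heights on the unit pupil (illustrative only)
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 2000))          # uniform sampling of the disk
phi = rng.uniform(0, 2 * np.pi, 2000)
Z = zernike_basis(rho, phi)
true_c = np.array([0, 0, 0, 1.5, 0.3, 0.1, 0.05, 0.02, 0.04, 0.01])
z = Z @ true_c + 0.01 * rng.standard_normal(rho.size)   # heights + noise

c, *_ = np.linalg.lstsq(Z, z, rcond=None)       # least-squares coefficients
hoa_rms = np.sqrt(np.sum(c[6:]**2))             # RMS of the third-order terms
# (exact RMS wavefront would require an orthonormal basis; illustrative here)
print(np.round(c, 3), hoa_rms)
```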
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2013-07-01
Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, they have drawbacks, such as a slow training rate, a propensity to become trapped in local minima, and poor global search ability. In order to improve the overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with a double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize the antecedent and consequent parameters of the constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensembles of T-S FNNs with RCDPSO_DM optimization, to further improve the stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The results demonstrate that the intelligent ensemble of T-S FNNs based on the RCDPSO_DM achieves superior performance, in terms of stability, efficiency, precision and generalizability, over a PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs, and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling, making CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
2012-09-01
by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, the Advanced Brain Monitoring (ABM) B-Alert X10, and the Quasar DSI helmet-based systems (ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012). EPOC is a trademark of Emotiv.
ERIC Educational Resources Information Center
Johnson, Jim
2017-01-01
A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
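The iteration described above can be sketched as a projected-gradient solver for the constrained minimum-variance problem. The code below is a minimal illustration of that idea (the step-size choice and iteration count are assumptions of the sketch, not the authors' algorithm); a warm start from a neighboring imaging point would simply be passed as w0.

```python
import numpy as np

def iterative_mv(R, a, w0=None, n_iter=500):
    """Projected-gradient iteration for min w^H R w subject to w^H a = 1.

    Avoids the O(L^3) matrix inversion of the standard MVB: each step is a
    matrix-vector product, O(L^2). Passing the converged weights of a nearby
    imaging point as w0 (warm start) reduces the iterations needed.
    """
    aha = (a.conj() @ a).real
    w = a / aha if w0 is None else w0.copy()
    mu = 1.0 / np.trace(R).real                  # safe step size (trace >= lambda_max)
    for _ in range(n_iter):
        w = w - mu * (R @ w)                     # gradient step on w^H R w
        w = w + ((1 - a.conj() @ w) / aha) * a   # re-project onto the constraint
    return w

# Check against the direct (inversion-based) MV solution
rng = np.random.default_rng(1)
L = 16
a = np.exp(1j * np.pi * np.arange(L) * np.sin(0.3))   # assumed steering vector
X = rng.standard_normal((L, 200)) + 1j * rng.standard_normal((L, 200))
R = X @ X.conj().T / 200 + np.eye(L)
w_direct = np.linalg.solve(R, a)
w_direct = w_direct / (a.conj() @ w_direct)
print(np.linalg.norm(iterative_mv(R, a) - w_direct))  # small residual
```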
GIS-based niche modeling for mapping species' habitats
Rotenberry, J.T.; Preston, K.L.; Knick, S.
2006-01-01
Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
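A minimal numpy sketch of the described partitioning, assuming a multivariate-normal presence sample (the authors provide SAS code; this Python version is illustrative only):

```python
import numpy as np

def d2_partition(X_presence, X_grid):
    """Partition Mahalanobis D^2 into principal components (sketch).

    X_presence : (n, p) environmental variables at species detections
    X_grid     : (m, p) same variables at the map cells to be scored
    D^2 for a cell is the sum over components of z_k^2 / lambda_k; the
    component with the smallest eigenvalue (minimum variance) expresses
    the most consistent, hence most limiting, habitat requirement.
    """
    mu = X_presence.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X_presence, rowvar=False))  # ascending
    Z = (X_grid - mu) @ evecs                    # deviations on the components
    return Z**2 / evals, evals                   # per-component contributions

# Illustrative data: variable 0 is tightly constrained where the species occurs
rng = np.random.default_rng(2)
X_presence = rng.normal(0.0, [0.2, 1.0, 3.0], size=(200, 3))
X_grid = rng.normal(0.0, 2.0, size=(1000, 3))
comp, evals = d2_partition(X_presence, X_grid)
# comp[:, 0] (smallest-variance component) maps the limiting requirement;
# summing the columns recovers the full Mahalanobis D^2 for each cell.
```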
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can therefore occur when using the MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are among such impacts.
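The minimum variance (Capon) estimator being compared can be sketched in a few lines; the filter order, tone spacing, and noise level below are invented to reproduce the qualitative behavior described (lines resolved only when the filter's degrees of freedom exceed the number of lines):

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate of a real 1-D series.

    order : filter length (degrees of freedom); it must exceed the number
            of distinct spectral lines to resolve them.
    freqs : normalized frequencies in cycles per sample.
    """
    n = len(x) - order
    X = np.column_stack([x[i:i + n] for i in range(order)])   # lagged data matrix
    R = X.T @ X / n                                           # autocorrelation matrix
    Rinv = np.linalg.inv(R + 1e-8 * np.trace(R) / order * np.eye(order))
    k = np.arange(order)
    P = np.empty(len(freqs))
    for j, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * k)                       # steering to frequency f
        P[j] = order / np.real(a.conj() @ Rinv @ a)           # P_MV(f) = M / (a^H R^-1 a)
    return P

# Two closely spaced tones plus noise, resolved since order >> number of lines
rng = np.random.default_rng(3)
t = np.arange(1024) / 100.0                                   # fs = 100 Hz
x = np.sin(2*np.pi*10*t) + np.sin(2*np.pi*12*t) + 0.1*rng.standard_normal(t.size)
freqs = np.linspace(0.0, 0.5, 512)
P = capon_spectrum(x, order=32, freqs=freqs)                  # peaks near 0.10 and 0.12
```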
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience-oriented convergence-improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem, as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods, and they verify the proposed method in terms of both reaching optimal solutions and robustness.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
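For reference, the number-phase uncertainty product the abstract appeals to, together with the conventional single-mode Kerr (self-phase-modulation) Hamiltonian, can be written as follows; this is the textbook notation, supplied here rather than quoted from the paper.

```latex
% Number-phase minimum-uncertainty product maintained by the scheme:
\Delta n \, \Delta\phi \;\geq\; \tfrac{1}{2}
% Conventional single-mode Kerr (self-phase-modulation) evolution:
\hat{H} = \hbar\chi\,(\hat{a}^{\dagger}\hat{a})^{2},
\qquad
\hat{U}(t) = \exp\!\big[-\,\mathrm{i}\,\chi t\,(\hat{a}^{\dagger}\hat{a})^{2}\big]
```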
Utility functions predict variance and skewness risk preferences in monkeys
Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram
2016-01-01
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
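The variance-risk logic can be illustrated with a toy calculation. The convex-then-concave utility below is invented for the example (not the fitted monkey utilities); it reproduces the reported pattern of preferring high variance at low expected value and low variance at high expected value.

```python
import numpy as np

def u(x):
    """Hypothetical S-shaped utility on [0, 1]: convex below 0.5, concave above."""
    return np.where(x <= 0.5, 2 * x**2, 1 - 2 * (1 - x)**2)

def expected_utility(mean, spread):
    """50/50 gamble between mean - spread and mean + spread (identical EVs)."""
    return 0.5 * (u(mean - spread) + u(mean + spread))

for m in (0.25, 0.75):                      # low vs high EV (normalized reward)
    print(m, expected_utility(m, 0.20), expected_utility(m, 0.05))
# At m = 0.25 the high-variance gamble has the higher expected utility;
# at m = 0.75 the low-variance gamble wins, matching the reported preferences.
```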
Sztepanacz, Jacqueline L; Rundle, Howard D
2012-10-01
Directional selection is prevalent in nature, yet phenotypes tend to remain relatively constant, suggesting a limit to trait evolution. However, the genetic basis of this limit is unresolved. Given widespread pleiotropy, opposing selection on a trait may arise from the effects of the underlying alleles on other traits under selection, generating net stabilizing selection on trait genetic variance. These pleiotropic costs of trait exaggeration may arise through any number of other traits, making them hard to detect in phenotypic analyses. Stabilizing selection can be inferred, however, if genetic variance is greater among low- compared to high-fitness individuals. We extend a recently suggested approach to provide a direct test of a difference in genetic variance for a suite of cuticular hydrocarbons (CHCs) in Drosophila serrata. Despite strong directional sexual selection on these traits, genetic variance differed between high- and low-fitness individuals and was greater among the low-fitness males for seven of eight CHCs, significantly more than expected by chance. Univariate tests of a difference in genetic variance were nonsignificant but likely have low power. Our results suggest that further CHC exaggeration in D. serrata in response to sexual selection is limited by pleiotropic costs mediated through other traits. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
… Survey, Certification, and Enforcement Procedures, Special Requirements, § 488.64 Remote facility variances for utilization review requirements. (a) … such facility or direct responsibility for the care of the patients being reviewed or, in the case of a…
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi
2002-01-01
Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
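The optimization loop described, simulated annealing over candidate gauge locations with an estimated (kriging) variance as the objective, can be sketched generically; the cost and neighbor functions here are placeholders, not the authors' implementation.

```python
import numpy as np

def anneal(cost, x0, neighbor, T0=1.0, alpha=0.995, n_iter=5000, seed=0):
    """Generic simulated annealing: accept a candidate layout if it lowers
    the cost, or with Boltzmann probability exp(-dc/T) otherwise."""
    rng = np.random.default_rng(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    T = T0
    for _ in range(n_iter):
        y = neighbor(x, rng)                 # perturb one gauge to a nearby site
        cy = cost(y)
        if cy < c or rng.random() < np.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        T *= alpha                           # geometric cooling schedule
    return best_x, best_c

# cost(x) would return the network-averaged kriging (estimation) variance for
# gauge locations x, computed from a fitted variogram of the climate fields.
```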
Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.
Li, Jielin; Hassebrook, Laurence G; Guan, Chun
2003-01-01
Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
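A minimal sketch of the two-frequency unwrapping step being optimized: the coarse low-frequency phase selects the integer fringe order of the wrapped high-frequency phase. The pattern frequencies and noise level are illustrative assumptions; the failure modes noted in the comments mirror the trade-off the abstract describes.

```python
import numpy as np

def two_freq_unwrap(phi_lo, phi_hi, f_lo, f_hi):
    """phi_lo: nonambiguous low-frequency phase (radians);
    phi_hi: wrapped high-frequency phase in [0, 2*pi); f_hi > f_lo."""
    scale = f_hi / f_lo
    k = np.round((scale * phi_lo - phi_hi) / (2 * np.pi))   # integer fringe order
    return phi_hi + 2 * np.pi * k                           # unwrapped fine phase

# Illustrative check: one projector period across the scene at the low frequency
rng = np.random.default_rng(6)
theta = np.linspace(0.0, 0.99, 200)                         # scene coordinate
f_lo, f_hi = 1.0, 8.0
phi_lo = 2*np.pi*f_lo*theta + 0.01*rng.standard_normal(theta.size)  # noisy coarse phase
phi_hi_true = 2*np.pi*f_hi*theta
phi_hi = np.mod(phi_hi_true, 2*np.pi)                       # wrapped fine phase
err = two_freq_unwrap(phi_lo, phi_hi, f_lo, f_hi) - phi_hi_true
print(np.max(np.abs(err)))                                  # ~0: correct fringe orders
# If f_hi/f_lo is too large for the noise in phi_lo, np.round picks wrong orders
# (ambiguous unwrapping); if too small, phase noise maps directly into depth error.
```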
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Meghan K.; Argall, Matthew R.; Joyce, Colin J., E-mail: mkl54@wildcats.unh.edu, E-mail: Matthew.Argall@unh.edu, E-mail: cjl46@wildcats.unh.edu
We report observations of low-frequency waves at 1 au by the magnetic field instrument on the Advanced Composition Explorer (ACE/MAG) and show evidence that they arise due to newborn interstellar pickup He+. Twenty-five events are studied. They possess the generally predicted attributes: spacecraft-frame frequencies slightly greater than the He+ cyclotron frequency, left-hand polarization in the spacecraft frame, and transverse fluctuations with minimum variance directions that are quasi-parallel to the mean magnetic field. Their occurrence spans the first 18 years of ACE operations, with no more than 3 such observations in any given year. Thus, the events are relatively rare. As with past observations by the Ulysses and Voyager spacecraft, we argue that the waves are seen only when the background turbulence is sufficiently weak as to allow for the slow accumulation of wave energy over many hours.
Experimental demonstration of quantum teleportation of a squeezed state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro
2005-10-15
Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe a smaller variance of the teleported squeezed state than that for the vacuum state input.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
The Water Level and Transport Regimes of the Lower Columbia River
NASA Astrophysics Data System (ADS)
Jay, D. A.
2011-12-01
Tidal rivers are vital, spatially extensive conduits of material from land to sea. Yet the tidal-fluvial regime remains poorly understood relative to the bordering fluvial and estuarine/coastal regimes with which it interacts. The 235 km long Lower Columbia River (LCR) consists of five zones defined by topographic constrictions: a 5 km long ocean entrance, the lower estuary (15 km), an energy minimum (67 km), the tidal river (142 km), and a landslide zone (5 km). Buoyant plume lift-off occurs within the entrance zone, which is dominated by tidal and wave energy. The lower estuary is strongly tidal, amplifies the semidiurnal tide, and has highly variable salinity intrusion. Tidal and fluvial influences are balanced in the wide energy minimum, into which salinity intrudes during low-flow periods. It has a turbidity maximum and a dissipation minimum at its lower end, but a water-level variance minimum at its landward end. The tidal river shows a large increase in the ratio of fluvial-to-tidal energy in the landward direction and strong seasonal variations in tidal properties. Because tidal monthly water level variations are large, low waters are higher on spring than neap tides. The steep landslide zone has only weak tides and is the site of the most seaward hydropower dam. Like many dammed systems, the LCR has pseudo-tides: daily and weekly hydropower peaking waves that propagate seaward. Tidal constituent ratios vary in the along-channel direction due to frictional non-linearities, the changing balance of dissipation vs. propagation, and power peaking. Long-term changes to the system have occurred due to climate change and direct human manipulation. Flood control, hydropower regulation, and diversion have reduced peak flows, total load, and sand transport by ~45%, 50%, and 80%, respectively, causing a blue-shift in the flow and water level power spectra. Overbank flows have been largely eliminated through a redundant combination of diking and flow regulation. Export of sand to the ocean now occurs mainly through dredging, though fine sediment export may be higher than natural levels. Reduced sediment input and navigational development have reduced water levels in the upper tidal river by ~0.4/1.5 m during low/high flow periods, impacting both navigation and shallow-water habitat availability. Tidal amplitudes have increased due both to increased coastal tides and reduced friction. This exacerbates difficulties with low waters during fall neap tides. Climate-induced changes have so far had much less influence on system properties than human modifications. At present, regional sea level (RSL) rise and tectonic change are in balance, yielding no net sea level rise.
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
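For the constant-variance (OLS) branch of the procedure, the LCMRL logic can be sketched directly from the description above; the spiking levels, replicate counts, and noise model in the example are invented, and the published procedure additionally handles the variance-weighted case.

```python
import numpy as np
from scipy import stats

def lcmrl(true_conc, measured, coverage=0.99, dqo=(0.5, 1.5)):
    """Sketch of the LCMRL idea for the constant-variance (OLS) case.

    Fits measured vs. true concentration, draws two-sided prediction bands
    at `coverage`, and returns the lowest true concentration at which the
    band lies inside the 50-150% recovery data quality objectives.
    """
    x = np.asarray(true_conc, float)
    y = np.asarray(measured, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                      # OLS regression line
    s = np.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
    tcrit = stats.t.ppf(1 - (1 - coverage) / 2, n - 2)
    grid = np.linspace(x.min(), x.max(), 2000)
    half = tcrit * s * np.sqrt(1 + 1/n + (grid - x.mean())**2
                               / np.sum((x - x.mean())**2))
    lo = b0 + b1 * grid - half                        # lower prediction band
    hi = b0 + b1 * grid + half                        # upper prediction band
    ok = (lo >= dqo[0] * grid) & (hi <= dqo[1] * grid)
    return grid[ok].min() if ok.any() else np.nan

# Seven replicates at each of six spiking levels (synthetic example)
rng = np.random.default_rng(4)
conc = np.repeat([0.1, 0.3, 0.5, 1.0, 2.0, 3.0], 7)
meas = 0.95 * conc + rng.normal(0.0, 0.05, conc.size)
print(lcmrl(conc, meas))
```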
Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco
2002-01-01
The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite existing numerical and symbolic computation software. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instances when one variance is a minimum and the other is a maximum, that is, when squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrodinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
Zeng, Xing; Chen, Cheng; Wang, Yuanyuan
2012-12-01
In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with a Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With this optimization, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than that of the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise powers needed in calculating the Wiener postfilter are estimated respectively from the orthogonal signal and noise subspaces constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated and experimental data, and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformer. We use the full width at half maximum (FWHM) and the peak side lobe level (PSL) to quantify imaging resolution and the contrast ratio (CR) to quantify imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformers, respectively. In addition, the effect of sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers better performance than the DAS, MV and ESBMV beamformers, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
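The eigenspace projection at the core of the ESBMV beamformer can be sketched as follows; the subspace dimension and diagonal loading are assumptions of the sketch, and the Wiener postfilter scaling described in the abstract is summarized in the docstring rather than implemented.

```python
import numpy as np

def esbmv_weights(R, a, n_sig=1, loading=1e-2):
    """Eigenspace-based minimum variance weights (sketch).

    Computes the MV (Capon) weight, then projects it onto the signal
    subspace spanned by the n_sig largest eigenvectors of the sample
    covariance, discarding off-subspace noise contributions. The paper
    additionally scales the beamformer output with a Wiener postfilter
    whose signal and noise powers are estimated from the signal and
    noise subspaces, respectively.
    """
    M = R.shape[0]
    Rl = R + loading * (np.trace(R).real / M) * np.eye(M)  # diagonal loading
    w = np.linalg.solve(Rl, a)
    w = w / (a.conj() @ w)                                 # MV (Capon) weight
    evals, evecs = np.linalg.eigh(Rl)                      # ascending eigenvalues
    Es = evecs[:, -n_sig:]                                 # signal subspace
    return Es @ (Es.conj().T @ w)                          # projected ESBMV weight
```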
ERIC Educational Resources Information Center
Penfield, Randall D.; Algina, James
2006-01-01
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
NASA Astrophysics Data System (ADS)
Volwerk, Martin; Goetz, Charlotte; Richter, Ingo; Delva, Magda; Ostaszewski, Katharina; Schwingenschuh, Konrad; Glassmeier, Karl-Heinz
2018-06-01
Context. The Rosetta Plasma Consortium (RPC) magnetometer (MAG) data during the tail excursion in March-April 2016 are used to investigate the magnetic structure of, and activity in, the tail region of the weakly outgassing comet 67P/Churyumov-Gerasimenko (67P). Aims: The goal of this study is to compare the large-scale (near-)tail structure with that seen by earlier missions to strongly outgassing comets, and the small-scale turbulent energy cascade (un)related to the singing comet phenomenon. Methods: The usual methods of space plasma physics are used to analyse the magnetometer data, such as minimum variance analysis, spectral analysis, and power law fitting. The cone angle and clock angle of the magnetic field are also calculated to interpret the data. Results: It is found that comet 67P does not have a classical draped magnetic field and no bi-lobal tail structure at this late stage of the mission, when the comet is already at 2.7 AU distance from the Sun. The main magnetic field direction seems to be more across the tail direction, which may indicate an asymmetric pick-up cloud. During periods of singing comet activity the propagation direction of the waves is at large angles with respect to the magnetic field and to the radial direction towards the comet. The turbulent cascade of magnetic energy from large to small scales differs in the presence of singing from that without it.
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume) and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of them, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does using D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and consideration of catheter volumes results in a realistic description of anatomical dose distributions.
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
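A compact way to see the connection: the Wiggins-style MED iteration below maximizes the scale-invariant varimax norm V(y) = sum(y^4)/(sum(y^2))^2, the same family of cost functions analyzed for blind equalization. The filter length, initialization, and toy wavelet are assumptions of the sketch.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def med_wiggins(x, L=20, n_iter=30):
    """Wiggins-style minimum entropy deconvolution (sketch).

    Iteratively sharpens the filtered output y = g * x by solving the
    fixed-point equations of the varimax-norm maximization: R g = b,
    where R is the Toeplitz autocorrelation of x and b cross-correlates
    the cubed output with the input.
    """
    N = len(x)
    r = np.correlate(x, x, 'full')[N - 1 : N - 1 + L]        # autocorr, lags 0..L-1
    g = np.zeros(L)
    g[L // 2] = 1.0                                          # spike initialization
    for _ in range(n_iter):
        y = np.convolve(x, g, 'same')
        b = np.correlate(y**3, x, 'full')[N - 1 : N - 1 + L] # cross-corr of y^3 with x
        g = solve_toeplitz((r, r), b)                        # solve R g = b
        g = g / np.linalg.norm(g)                            # scale invariance
    return g

# Toy example: sparse spikes blurred by a short wavelet, then deconvolved
rng = np.random.default_rng(3)
spikes = np.zeros(500)
spikes[rng.integers(0, 500, 8)] = rng.standard_normal(8)
x = np.convolve(spikes, np.array([1.0, -0.7, 0.3]), 'same')
g = med_wiggins(x)                                           # sharpening filter
```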
12 CFR 3.6 - Minimum capital ratios.
Code of Federal Regulations, 2010 CFR
2010-01-01
… Comptroller of the Currency, Department of the Treasury, Minimum Capital Ratios; Issuance of Directives, § 3.6 Minimum capital ratios. (a) Risk-based capital ratio. All…
Geophysical Inversion with Adaptive Array Processing of Ambient Noise
NASA Astrophysics Data System (ADS)
Traer, James
2011-12-01
Land-based seismic observations of microseisms generated during Tropical Storms Ernesto and Florence are dominated by signals in the 0.15-0.5 Hz band. Data from seafloor hydrophones in shallow water (70 m depth, 130 km off the New Jersey coast) show dominant signals in the gravity-wave frequency band, 0.02-0.18 Hz, and low amplitudes from 0.18-0.3 Hz, suggesting that the significant opposing wave components necessary for double-frequency (DF) microseism generation were negligible at the site. Both storms produced similar spectra, despite differing sizes, suggesting near-coastal shallow water as the dominant region for observed microseism generation. A mathematical explanation for a sign inversion induced in the passive fathometer response by minimum variance distortionless response (MVDR) beamforming is presented. This shows that, in the region containing the bottom reflection, the MVDR fathometer response is identical to that obtained with conventional processing multiplied by a negative factor. A model is presented for the complete passive fathometer response to ocean surface noise, interfering discrete noise sources, and locally uncorrelated noise in an ideal waveguide. The leading order term of the ocean surface noise produces the cross-correlation of vertical multipaths and yields the depth of sub-bottom reflectors. Discrete noise incident on the array via multipaths gives multiple peaks in the fathometer response. These peaks may obscure the sub-bottom reflections but can be attenuated with the use of MVDR steering vectors. A theory is presented for the signal-to-noise ratio (SNR) of the seabed reflection peak in the passive fathometer response as a function of seabed depth, seabed reflection coefficient, averaging time, bandwidth, and spatial directivity of the noise field. The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004, with 0.34 s of averaging time. In the 2004 experiment, the response showed the array depth varied periodically with an amplitude of 1 m and a period of 7 s, consistent with wave-driven motion of the array. This introduced a destructive interference which prevents the SNR from growing with averaging time, unless the motion is removed by use of a peak tracker.
A Study of the Southern Ocean: Mean State, Eddy Genesis & Demise, and Energy Pathways
NASA Astrophysics Data System (ADS)
Zajaczkovski, Uriel
The Southern Ocean (SO), due to its deep penetrating jets and eddies, is well-suited for studies that combine surface and sub-surface data. This thesis explores the use of Argo profiles and sea surface height (SSH) altimeter data from a statistical point of view. A linear regression analysis of SSH and hydrographic data reveals that the altimeter can explain, on average, about 35% of the variance contained in the hydrographic fields and more than 95% if estimated locally. Correlation maxima are found at mid-depth, where dynamics are dominated by geostrophy. Near the surface, diabatic processes are significant, and the variance explained by the altimeter is lower. Since SSH variability is associated with eddies, the regression of SSH with temperature (T) and salinity (S) shows the relative importance of S vs T in controlling density anomalies. The AAIW salinity minimum separates two distinct regions; above the minimum density changes are dominated by T, while below the minimum S dominates over T. The regression analysis provides a method to remove eddy variability, effectively reducing the variance of the hydrographic fields. We use satellite altimetry and output from an assimilating numerical model to show that the SO has two distinct eddy motion regimes. North and south of the Antarctic Circumpolar Current (ACC), eddies propagate westward with a mean meridional drift directed poleward for cyclonic eddies (CEs) and equatorward for anticyclonic eddies (AEs). Eddies formed within the boundaries of the ACC have an effective eastward propagation with respect to the mean deep ACC flow, and the mean meridional drift is reversed, with warm-core AEs propagating poleward and cold-core CEs propagating equatorward. This circulation pattern drives downgradient eddy heat transport, which could potentially transport a significant fraction (24 to 60 × 10^13 W) of the net poleward ACC eddy heat flux. We show that the generation of relatively large amplitude eddies is not a ubiquitous feature of the SO but rather a phenomenon that is constrained to five isolated, well-defined "hotspots". These hotspots are located downstream of major topographic features, with their boundaries closely following f/H contours. Eddies generated in these locations show no evidence of a bias in polarity and decay within the boundaries of the generation area. Eddies tend to disperse along f/H contours rather than following lines of latitude. We found enhanced values of both buoyancy (BP) and shear production (SP) inside the hotspots, with BP one order of magnitude larger than SP. This is consistent with baroclinic instability being the main mechanism of eddy generation. The mean potential density field estimated from Argo floats shows that inside the hotspots, isopycnal slopes are steep, indicating availability of potential energy. The hotspots identified in this thesis overlap with previously identified regions of standing meanders. We provide evidence that hotspot locations can be explained by the combined effect of topography, standing meanders that enhance baroclinic instability, and availability of potential energy to generate eddies via baroclinic instabilities.
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale, appear to be log-normal distributed.
Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.
Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe
2015-01-01
Geometric morphometrics aims to characterize the geometry of complex traits and is therefore by essence multivariate. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; and (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing the within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is similarly oriented in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation in the PCA, bgPCA and CVA. Without arguing that one method performs 'better' than another, it emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
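The contrast between the methods is easy to state in code. The bgPCA sketch below eigen-decomposes only the between-group covariance, whereas a CVA would first whiten by the pooled within-group covariance; the inputs in the usage comment are hypothetical.

```python
import numpy as np

def bg_pca(X, groups):
    """Between-group PCA (sketch): eigen-decompose the covariance of the
    group means and project all specimens onto those axes. Unlike CVA/LDA,
    there is no standardization by the pooled within-group covariance, so a
    shared direction of main variance (a 'line of least resistance') is
    left intact rather than dampened."""
    labels = np.unique(groups)
    grand = X.mean(axis=0)
    means = np.array([X[groups == g].mean(axis=0) for g in labels])
    B = (means - grand).T @ (means - grand) / (len(labels) - 1)
    evals, evecs = np.linalg.eigh(B)
    keep = np.argsort(evals)[::-1][: len(labels) - 1]   # at most g-1 informative axes
    return (X - grand) @ evecs[:, keep], evecs[:, keep]

# scores, axes = bg_pca(shape_coords, island_labels)    # hypothetical inputs
```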
A new Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the local parameters of the interplanetary current sheet. The method, called 'HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. It has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, despite the fact that in many cases the two methods behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than the MVM.
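For comparison, the classic MVM baseline against which HYTARO is tested amounts to an eigen-decomposition of the field's variance matrix; the Harris-like synthetic crossing in the demo is an invented stand-in for real crossing data.

```python
import numpy as np

def minimum_variance(B):
    """Classic minimum variance method (MVM) for a field time series.

    B : (N, 3) magnetic field samples across a current-sheet crossing.
    Returns the eigenvalues (ascending) and eigenvectors of the magnetic
    variance matrix; the eigenvector of the smallest eigenvalue estimates
    the sheet normal.
    """
    M = np.cov(B, rowvar=False)            # 3x3 variance matrix of the components
    evals, evecs = np.linalg.eigh(M)       # ascending eigenvalues
    return evals, evecs                    # evecs[:, 0] = minimum variance direction

# Synthetic Harris-like crossing: x rotates as tanh, y carries a bump of
# guide field, z (the normal direction) holds only noise.
rng = np.random.default_rng(5)
t = np.linspace(-1.0, 1.0, 400)
B = np.column_stack([np.tanh(3 * t) + 0.05 * rng.standard_normal(t.size),
                     0.5 / np.cosh(3 * t) + 0.05 * rng.standard_normal(t.size),
                     0.05 * rng.standard_normal(t.size)])
evals, evecs = minimum_variance(B)
# A trustworthy normal needs evals[1]/evals[0] >> 1; noisy crossings with this
# ratio near 1 are the regime where MVM returns the erroneous normals noted above.
print(evals, evecs[:, 0])                  # normal ~ z direction here
```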
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
Comparison of reproducibility of natural head position using two methods.
Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik
2012-01-01
Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained using two methods: the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods were comparable, but reproducibility was greater with the fluid level device, as shown by Dahlberg's coefficient and a Bland-Altman plot, and variance was smaller with the fluid level device method, as shown by precision and Pearson correlation. In conclusion, the mirror method and the fluid level device method were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance for obtaining natural head position.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance. The number of parameters was greatly reduced, from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of efficiency criteria did not guarantee excellent performance on the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well, but minimum and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulations.
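The Sobol analysis used in the second step can be sketched with a Saltelli-style estimator of first-order indices; the toy model and parameter bounds below are placeholders for the DHSVM parameters, not the study's setup.

```python
import numpy as np

def sobol_first_order(model, bounds, n=4096, seed=0):
    """Saltelli-style estimator of first-order Sobol indices (sketch).

    model  : vectorized function taking an (m, p) array to (m,) outputs
    bounds : sequence of p (low, high) parameter bounds
    """
    rng = np.random.default_rng(seed)
    p = len(bounds)
    lo, hi = np.asarray(bounds, float).T
    A = lo + (hi - lo) * rng.random((n, p))          # first sample matrix
    B = lo + (hi - lo) * rng.random((n, p))          # second sample matrix
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1 = np.empty(p)
    for i in range(p):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # A with column i from B
        S1[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S1

# Toy "hydrological" response dominated by the first parameter
S1 = sobol_first_order(lambda X: X[:, 0]**2 + 0.1 * X[:, 1], [(0, 1)] * 3)
print(S1)   # first index near 1, the others near 0
```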
Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E
2014-04-01
Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.
Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin
2016-01-01
An experience-oriented convergence-improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search restarts from them, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results are compared with some well-known heuristic methods and verify the proposed method both in reaching optimal solutions and in robustness. PMID:27399904
Asundi, Krishna; Johnson, Peter W; Dennerlein, Jack T
2012-01-01
To determine the number of direct measurements needed to obtain a representative estimate of typing force and wrist kinematics, continuous measures of keyboard reaction force and wrist joint angle were collected at the workstations of 22 office workers while they completed their own work over three days, six hours per day. Typing force and wrist kinematics during keyboard, mouse and idle activities were calculated for each hour of measurement, along with the variance in measurements between subjects and between days and hours within subjects. Variance between subjects was significantly greater than variance between days and hours within subjects. We therefore concluded that a single one-hour period of continuous measurement is sufficient to identify differences in typing force and wrist kinematics between subjects. Within subjects, day and hour of measurement had a significant effect on some measures and thus should be accounted for when comparing measures within a subject. The dose-response relationship between exposure to computer-related biomechanical risk factors and musculoskeletal disorders is poorly understood due to the difficulty and cost of direct measurement. This study demonstrates that a single hour of direct continuous measurement is sufficient to identify differences in wrist kinematics and typing force between individuals.
Factors affecting minimum push and pull forces of manual carts.
Al-Eisawi, K W; Kerk, C J; Congleton, J J; Amendola, A A; Jenkins, O C; Gaines, W
1999-06-01
The minimum forces needed to manually push or pull a 4-wheel cart of differing weights with similar wheel sizes from a stationary state were measured on four floor materials under different conditions of wheel width, diameter, and orientation. Cart load was increased from 0 to 181.4 kg in increments of 36.3 kg. The floor materials were smooth concrete, tile, asphalt, and industrial carpet. Two wheel widths were tested: 25 and 38 mm. Wheel diameters were 51, 102, and 153 mm. Wheel orientation was tested at four levels: F0R0 (all four wheels aligned in the forward direction), F0R90 (the two front wheels, the wheels furthest from the cart handle, aligned in the forward direction and the two rear wheels, the wheels closest to the cart handle, aligned at 90 degrees to the forward direction), F90R0 (the two front wheels aligned at 90 degrees to the forward direction and the two rear wheels aligned in the forward direction), and F90R90 (all four wheels aligned at 90 degrees to the forward direction). Wheel width did not have a significant effect on the minimum push/pull forces. The minimum push/pull forces were linearly proportional to cart weight, and inversely proportional to wheel diameter. The coefficients of rolling friction were estimated as 2.2, 2.4, 3.3, and 4.5 mm for hard rubber wheels rolling on smooth concrete, tile, asphalt, and industrial carpet floors, respectively. The effect of wheel orientation was not consistent over the tested conditions, but, in general, the smallest minimum push/pull forces were measured with all four wheels aligned in the forward direction, whereas the largest minimum push/pull forces were measured when all four wheels were aligned at 90 degrees to the forward direction. There was no significant difference between the push and pull forces when all four wheels were aligned in the forward direction.
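The reported proportionalities follow the classical rolling-resistance relation F ≈ f·N/r, with f the coefficient of rolling friction (a length), N the load and r the wheel radius. Below is a quick back-of-the-envelope check using the study's coefficients; taking the full cart load as the normal force and a 102 mm wheel are simplifying assumptions of this sketch.

```python
# Back-of-the-envelope minimum push force from rolling resistance,
# F ~ f * N / r, using the study's rolling-friction coefficients.
# Taking the full cart load as the normal force and a 102 mm wheel
# are simplifying assumptions of this sketch.
g = 9.81                     # m/s^2
mass = 181.4                 # kg, heaviest tested load
r = 0.102 / 2                # m, radius of a 102 mm diameter wheel
coeff = {"smooth concrete": 2.2e-3, "tile": 2.4e-3,
         "asphalt": 3.3e-3, "industrial carpet": 4.5e-3}  # f in metres

for floor, f in coeff.items():
    force = f * mass * g / r          # N, summed over all four wheels
    print(f"{floor:17s}: {force:5.1f} N")
```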
NASA Astrophysics Data System (ADS)
Huang, Zong-Ying; Pu, Zu-Yin; Xiao, Chi-Jie; Xong, Qui-Gang; Fu, Sui-Yan; Xie, Lun; Shi, Quan-Qi; Cao, Jin-Bin; Liu, Zhen-Xing; Shen, Cao; Shi, Jian-Kui; Lu, Li; Wang, Nai-Quan; Chen, Tao; Fritz, T.; Glasmeier, K.-H.; Daly, P.; Reme, H.
2004-04-01
From 11:10 to 11:40 UT on January 26, 2001, the four Cluster II spacecraft were located in the duskside high-latitude regions of the magnetosheath and magnetosheath boundary layer (MSBL). During this time interval the interplanetary magnetic field (IMF) had a negative Bz component. A detailed study of the multiple flux ropes (MFRs) observed in this period is conducted in this paper. It is found that: (1) The multiple flux ropes in the high-latitude MSBL appeared quasi-periodically with a repetition period of about 78 s, which is much shorter than the average occurrence period (about 8-11 min) of flux transfer events (FTEs) at the dayside magnetopause (MP). (2) All the flux ropes observed in this event had a strong core magnetic field. The axial orientation of most flux ropes is found to lie in the direction of minimum magnetic field variance; a few flux ropes had their axes lying in the direction of intermediate magnetic field variance; while for the remainder, the principal axes could not be determined by the method of Principal Axis Analysis (PAA). This complexity arises from the different trajectories of the spacecraft passing through the flux ropes. (3) Each flux rope had a good corresponding HT frame of reference in which it was in a quasi-steady state. All flux ropes moved along the surface of the MP in a similar direction, indicating that these flux ropes all came from the dawnside low latitude. Their radial scale is 1-2 RE, comparable to the normal diameter of FTEs observed at the dayside MP. (4) The energetic ions that originated from the magnetosphere flowed out to the magnetosheath on the whole, while the solar wind plasma flowed into the magnetosphere along the axes of the flux ropes. The flux ropes thus offered channels for the transport of solar wind plasma into the magnetosphere and the escape of magnetospheric plasma into interplanetary space. (5) Each event was accompanied by an enhanced reversal of the dusk-dawn electric field, which could be identified as convective electric field in nature.
Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland
Pluhowski, E.J.
1981-01-01
Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April, whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide-ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
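The backward stepwise generation step can be illustrated schematically: starting from a fitted ensemble, repeatedly drop the base classifier whose removal costs the least in-sample fitness, yielding a nested collection of ensembles of descending size. This is a generic sketch of that idea, not the authors' MSEBAG implementation.

```python
# Schematic backward stepwise generation of an ensemble collection:
# repeatedly drop the member whose removal hurts in-sample accuracy
# least. A generic sketch, not the authors' MSEBAG implementation.
import numpy as np

def majority_accuracy(preds, y):
    # preds: (n_members, n_samples) array of 0/1 votes
    vote = (preds.mean(axis=0) >= 0.5).astype(int)
    return (vote == y).mean()

def backward_stepwise(preds, y):
    members = list(range(preds.shape[0]))
    collection = [list(members)]
    while len(members) > 1:
        scores = [majority_accuracy(preds[[m for m in members if m != d]], y)
                  for d in members]
        members.pop(int(np.argmax(scores)))
        collection.append(list(members))
    return collection   # nested ensembles of descending size and fitness

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                  # ground-truth labels
agree = rng.random((9, 200)) < 0.7           # each voter ~70% correct
preds = np.where(agree, y, 1 - y)            # votes of 9 weak members
print(backward_stepwise(preds, y))
```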
Response to selection while maximizing genetic variance in small populations.
Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E
2016-09-20
Rare breeds represent a valuable resource for future market demands. These populations are usually well adapted, but their low census size compromises the genetic diversity and the future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance in a single population could be a valuable option. The aim of this work was to study the effect of maximizing genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), an MVT scenario with a restriction on increases in average inbreeding (D), an MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is a valuable alternative, in particular for long-term response to selection.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in N. roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were detected in N. roborowskii, while V could not be detected. Na, K and Ca showed high concentrations. Ti showed the maximum content variance, while K showed the minimum. Four principal components were obtained from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
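The PCA step is straightforward to reproduce in outline. The sketch below standardizes a samples-by-elements concentration table and extracts four components; the data and element list are random stand-ins, not the measured values.

```python
# PCA on a samples-by-elements concentration table. The data and the
# element list are random stand-ins, not the measured values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
elements = ["Na", "K", "Ca", "Cr", "Fe", "P", "Ti", "Mg"]
X = rng.lognormal(mean=2.0, sigma=0.5, size=(15, len(elements)))

Xs = StandardScaler().fit_transform(X)     # standardize each element
pca = PCA(n_components=4).fit(Xs)
print("cumulative variance explained:",
      pca.explained_variance_ratio_.cumsum().round(3))
# Loadings of the first component show which elements dominate it.
print(dict(zip(elements, pca.components_[0].round(2))))
```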
Is my study system good enough? A case study for identifying maternal effects.
Holand, Anna Marie; Steinsland, Ingelin
2016-06-01
In this paper, we demonstrate how simulation studies can be used to answer questions about identifiability and the consequences of omitting effects from a model. The methodology is presented through a case study in which the identifiability of genetic and/or individual (environmental) maternal effects is explored. Our study system is a wild house sparrow (Passer domesticus) population with known pedigree. We fit pedigree-based (generalized) linear mixed models (animal models), with and without additive genetic and individual maternal effects, and use the deviance information criterion (DIC) for choosing between these models. Pedigree and R code for the simulations are available. For this study system, the simulation studies show that only large maternal effects can be identified: the genetic maternal effect (and similarly the individual maternal effect) has to be at least half of the total genetic variance to be identified. The consequences of omitting a maternal effect when it is present are explored. Our results indicate that the total (genetic and individual) variance is accounted for. When an individual (environmental) maternal effect is omitted from the model, this only influences the estimated (direct) individual (environmental) variance. When a genetic maternal effect is omitted from the model, both (direct) genetic and (direct) individual variance estimates are overestimated.
Flow convergence caused by a salinity minimum in a tidal channel
Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey
2006-01-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. (3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
Plasma dynamics on current-carrying magnetic flux tubes
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1992-01-01
A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Estimation of lipids and lean mass of migrating sandpipers
Skagen, Susan K.; Knopf, Fritz L.; Cade, Brian S.
1993-01-01
Estimation of lean mass and lipid levels in birds involves the derivation of predictive equations that relate morphological measurements and, more recently, total body electrical conductivity (TOBEC) indices to known lean and lipid masses. Using cross-validation techniques, we evaluated the ability of several published and new predictive equations to estimate lean and lipid mass of Semipalmated Sandpipers (Calidris pusilla) and White-rumped Sandpipers (C. fuscicollis). We also tested ideas of Morton et al. (1991), who stated that current statistical approaches to TOBEC methodology misrepresent precision in estimating body fat. Three published interspecific equations using TOBEC indices predicted lean and lipid masses of our sample of birds with average errors of 8-28% and 53-155%, respectively. A new two-species equation relating lean mass and TOBEC indices revealed average errors of 4.6% and 23.2% in predicting lean and lipid mass, respectively. New intraspecific equations that estimate lipid mass directly from body mass, morphological measurements, and TOBEC indices yielded about a 13% error in lipid estimates. Body mass and morphological measurements explained a substantial portion of the variance (about 90%) in fat mass of both species. Addition of TOBEC indices improved the predictive model more for the smaller than for the larger sandpiper. TOBEC indices explained an additional 7.8% and 2.6% of the variance in fat mass and reduced the minimum breadth of prediction intervals by 0.95 g (32%) and 0.39 g (13%) for Semipalmated and White-rumped Sandpipers, respectively. The breadth of prediction intervals for models used to predict fat levels of individual birds must be considered when interpreting the resultant lipid estimates.
Xue, Dan; Li, Chengfan; Liu, Qian
2015-06-01
In China, visibility has become an important issue that concerns both society and the scientific community. In order to study visibility characteristics and their influencing factors, visibility data, air pollutant data, and meteorological data for the year 2013 were obtained for Shanghai. The temporal variation of atmospheric visibility was analyzed. The mean daily visibility in Shanghai was 19.1 km. Visibility exhibited an obvious seasonal cycle: the maximum and minimum occurred in September and December, with values of 27.5 and 7.7 km, respectively. The relationships between visibility and air pollutant data were calculated. Visibility had negative correlations with NO2, CO, PM2.5, PM10, and SO2, and a weak positive correlation with O3. Meteorological data were clustered into four groups to reveal the joint contribution of meteorological variables to daily average visibility. Under conditions of high temperature and wind speed, visibility in Shanghai reached about 25 km, while visibility decreased to 16 km under weather types with low wind speed, low temperature and high relative humidity. Principal component analysis was also applied to identify the main causes of visibility variance. The results showed that low visibility over Shanghai was mainly due to high air pollution concentrations associated with low wind speed, which explained 44.99% of the total variance. These results provide new knowledge for better understanding the variations of visibility and have direct implications for sound policy on visibility improvement in Shanghai.
Fractal dimension and the navigational information provided by natural scenes.
Shamsyeh Zahedi, Moosarreza; Zeil, Jochen
2018-01-01
Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction, as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained, and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is, the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast to show (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
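For a panoramic image, where rotation amounts to a circular column shift, a rotIDF can be computed in a few lines. The sketch below uses a synthetic image and a plain RMS pixel difference; both are illustrative assumptions, not the paper's imagery or exact formulation.

```python
# Rotational image difference function (rotIDF) sketch for a panoramic
# image, where rotation is a circular column shift. The synthetic image
# and plain RMS difference are illustrative assumptions.
import numpy as np

def rot_idf(panorama):
    ref = panorama.astype(float)
    n_cols = ref.shape[1]
    return np.array([np.sqrt(np.mean((np.roll(ref, s, axis=1) - ref) ** 2))
                     for s in range(n_cols)])

rng = np.random.default_rng(2)
img = rng.random((64, 360))        # 64 x 360 pixels: one column per degree
idf = rot_idf(img)
print("minimum at shift:", idf.argmin(), "deg")   # the reference heading
print("depth of the rotIDF:", round(idf.max() - idf.min(), 3))
```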
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
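The temporal method underlying such estimators is easy to sketch. The code below uses the Nei-Tajima style Fc statistic with the standard sampling correction and the relation E[F] ≈ t/(2Ne); this is a generic illustration, and Pollak's proposed statistic differs in detail.

```python
# Temporal-method sketch for effective population size from allele
# frequency change, using the Nei-Tajima style Fc statistic and the
# standard sampling correction together with E[F] ~ t / (2 Ne). A
# generic illustration; Pollak's 1983 statistic differs in detail.
import numpy as np

def estimate_ne(x, y, t, s0, st):
    """x, y: allele frequencies at generations 0 and t; s0, st: numbers
    of individuals sampled at the two times."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    fc = np.mean((x - y) ** 2 / ((x + y) / 2 - x * y))
    f_drift = fc - 1 / (2 * s0) - 1 / (2 * st)   # remove sampling noise
    return t / (2 * f_drift)

x = [0.45, 0.55, 0.30, 0.70]    # frequencies at generation 0
y = [0.50, 0.50, 0.22, 0.78]    # frequencies at generation t
print("Ne estimate:", round(estimate_ne(x, y, t=10, s0=100, st=100), 1))
```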
NASA Astrophysics Data System (ADS)
Li, Zhi; Jin, Jiming
2017-11-01
Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method reproduced intersite and intervariable correlations realistically and supported hydrological modeling. The SWAT model was well calibrated to monthly streamflow, with a model efficiency coefficient of 0.78. The annual mean precipitation was projected not to change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that intra-annual and interannual variations in the future climate would be greater than in the current climate. The total annual streamflow was found to change insignificantly, but its variance ratio of 2011-2040 to 1961-2005 was 1.25 ± 0.55. Streamflow variability was predicted to become greater in most months on the seasonal scale because of increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.
Combining the Hanning windowed interpolated FFT in both directions
NASA Astrophysics Data System (ADS)
Chen, Kui Fu; Li, Yan Feng
2008-06-01
The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed here. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition, with the variance reduced by at most 2/7. However, it is also shown that the estimation variance of the IFFT windowed with the Hanning window is significantly higher than that without windowing.
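The baseline two-spectral-line method being improved on can be stated compactly: with a Hann window, the fractional-bin offset is δ = (2α - 1)/(1 + α), where α is the ratio of the two largest line magnitudes. A sketch of that classical estimator (not the proposed three-line approach) follows.

```python
# Classical two-spectral-line interpolated FFT with a Hann window:
# delta = (2*alpha - 1) / (1 + alpha), alpha = |X[k2]| / |X[k1]|.
# A sketch of the baseline method, not the proposed three-line one.
import numpy as np

fs, n = 1000.0, 1024
f_true = 123.4                                   # Hz, deliberately off-bin
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t)

X = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(X[1:]) + 1)                    # highest line (skip DC)
k2 = k + 1 if X[k + 1] >= X[k - 1] else k - 1    # second highest line
alpha = X[k2] / X[k]
delta = (2 * alpha - 1) / (1 + alpha)            # fractional bin offset
f_est = (k + np.sign(k2 - k) * delta) * fs / n
print(f"estimated {f_est:.3f} Hz vs true {f_true} Hz")
```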
A statistical study of ionopause perturbation and associated boundary wave formation at Venus.
NASA Astrophysics Data System (ADS)
Chong, G. S.; Pope, S. A.; Walker, S. N.; Zhang, T.; Balikhin, M. A.
2017-12-01
In contrast to Earth, Venus does not possess an intrinsic magnetic field. Hence the interaction between the solar wind and Venus is significantly different from that at Earth, even though these two planets were once considered similar. Within the induced magnetosphere and ionosphere of Venus, previous studies have shown the existence of ionospheric boundary waves. These structures may play an important role in the atmospheric evolution of Venus. Using Venus Express data, crossings of the ionopause boundary during 2011 are identified from observations of photoelectrons. Pulses of dropouts in the electron energy spectrometer data were observed in 92 events, suggesting perturbations of the boundary. Minimum variance analysis of the 1 Hz magnetic field data for the perturbations is conducted and used to confirm the occurrence of boundary waves. Statistical analysis shows that they propagate mainly in the ±VSO-Y direction in the polar north terminator region. The generation mechanisms of the boundary waves and their evolution into the potential nonlinear regime are discussed and analysed.
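Minimum variance analysis itself reduces to an eigen-decomposition of the field covariance matrix, with the smallest-eigenvalue eigenvector estimating the boundary-normal direction. The sketch below uses synthetic field vectors, not the Venus Express series.

```python
# Minimum variance analysis (MVA) sketch: the eigenvector of the field
# covariance matrix with the smallest eigenvalue estimates the
# boundary-normal direction. Synthetic field vectors, illustrative only.
import numpy as np

def minimum_variance(b):
    """b: (n_samples, 3) array of magnetic field vectors."""
    cov = np.cov(b, rowvar=False)        # 3 x 3 covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return vals, vecs                    # vecs[:, 0] = min-variance dir.

rng = np.random.default_rng(3)
# Large fluctuations in x and y, small along z, so z should emerge as
# the minimum variance (normal) direction.
b = rng.normal(0.0, [5.0, 3.0, 0.3], size=(600, 3))
vals, vecs = minimum_variance(b)
print("eigenvalues:", vals.round(2))     # lambda2/lambda1 gauges quality
print("min-variance direction:", vecs[:, 0].round(2))
```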
Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao
2016-07-12
In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto-regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are performed to verify the effectiveness, and the filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum fitting accuracy for a single noise component of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving on them by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
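The AR-modelling step can be illustrated with a plain Yule-Walker fit. The sketch below fits an AR(3) to a synthetic drift-like series; it is a generic illustration and does not reproduce the paper's improved model (which works on the raw rather than de-meaned signal) or the SHAKF filter.

```python
# Plain Yule-Walker AR(3) fit to a synthetic drift-like series. A
# generic illustration of the AR-modelling step; the paper's improved
# model (fit to the raw rather than de-meaned signal) and the SHAKF
# filter are not reproduced here.
import numpy as np

def yule_walker(x, order):
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])   # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]       # innovation variance
    return a, sigma2

rng = np.random.default_rng(4)
true_a = np.array([0.6, -0.2, 0.1])
x = np.zeros(5000)
for k in range(3, len(x)):
    x[k] = true_a @ x[k - 3:k][::-1] + rng.normal(0.0, 0.05)

a, s2 = yule_walker(x, 3)
print("estimated AR coefficients:", a.round(3))   # ~ [0.6, -0.2, 0.1]
```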
NASA Astrophysics Data System (ADS)
Asadizadeh, Mostafa; Moosavi, Mahdi; Hossaini, Mohammad Farouq; Masoumi, Hossein
2018-02-01
In this paper, a number of artificial rock specimens with two parallel (stepped and coplanar) non-persistent joints were subjected to direct shearing. The effects of bridge length (L), bridge angle (γ), joint roughness coefficient (JRC) and normal stress (σn) on the shear strength and cracking process of non-persistent jointed rock were studied extensively. The experimental program was designed based on the Taguchi method, and the validity of the resulting data was assessed using analysis of variance. The results revealed that σn and γ have the maximum and minimum effects on shear strength, respectively. Also, an increase in L from 10 to 60 mm led to a decrease in shear strength, while high levels of JRC and σn led to the initiation of tensile cracks due to asperity interlocking. Such tensile cracks, known as "interlocking cracks", normally initiate from the asperities and then propagate toward the specimen boundaries. Finally, the cracking processes of the specimens were classified into three categories, namely tensile cracking, shear cracking, and combined (mixed-mode tensile-shear) cracking.
Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells
2017-11-01
…approach including row scaling and Ward's minimum variance method was chosen. This analysis revealed two groups of four samples each.
2017-12-01
…carefully to ensure that only the minimum information needed for effective management control is requested. Requires cost-benefit analysis and PM… The baseline offers metrics that highlight performance trends and program variances. This information provides Program Managers and higher levels of… The existing training philosophy is effective only if the managers using the information have well-trained and experienced personnel that can
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey
2004-05-10
aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring… papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation)… minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well
NASA Astrophysics Data System (ADS)
Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang
2018-03-01
This paper explores two types of mathematical functions to fit single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by remote sensing missions, waveforms buried in noise or reflected from ice/land are removed, using the peak-to-mean ratio and the cosine similarity of the waveform, before wind speeds are retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations, based on principal component analysis (PCA), a minimum variance (MV) estimator and a Back Propagation (BP) network, are implemented. The results indicate that, compared to the best results of the single-parameter observations, the approaches based on principal component analysis and minimum variance could not significantly improve the retrieval accuracy; however, the BP networks obtain an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse of the spatial covariance matrix must be calculated. Noteworthy among the various attempts to solve this problem are beam-space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the other methods when the dimensionality of the covariance matrices is reduced to the same dimension.
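The core MV computation that such reduced-dimension methods accelerate is the Capon/MVDR weight solution w = R⁻¹a / (aᴴR⁻¹a). Below is a generic narrowband sketch of the full-dimension problem with diagonal loading; it does not implement the Legendre-basis dimension reduction.

```python
# Core minimum variance (Capon/MVDR) weight computation that reduced-
# dimension methods accelerate: w = R^-1 a / (a^H R^-1 a). A generic
# narrowband sketch with diagonal loading; the Legendre-basis dimension
# reduction itself is not implemented here.
import numpy as np

def mvdr_weights(snapshots, steering, loading=1e-3):
    """snapshots: (n_elements, n_snapshots) complex data;
    steering: (n_elements,) array response toward the focal point."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += loading * np.trace(R).real / n * np.eye(n)   # diagonal loading
    Ri_a = np.linalg.solve(R, steering)
    return Ri_a / (steering.conj() @ Ri_a)

rng = np.random.default_rng(5)
n_el, n_snap = 16, 200
a = np.exp(1j * np.pi * np.arange(n_el) * np.sin(0.0))  # broadside focus
x = rng.normal(size=(n_el, n_snap)) + 1j * rng.normal(size=(n_el, n_snap))
w = mvdr_weights(x, a)
print("distortionless check, w^H a:", (w.conj() @ a).round(3))  # -> 1
```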
Demographics of an ornate box turtle population experiencing minimal human-induced disturbances
Converse, S.J.; Iverson, J.B.; Savidge, J.A.
2005-01-01
Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
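The 58% figure can be reproduced in outline by projecting i.i.d. lognormal annual growth rates with the reported mean and temporal process variance. This is a simplified sketch under those distributional assumptions, not the authors' exact simulation.

```python
# Reproducing the ~58% decline probability in outline: draw i.i.d.
# lognormal annual growth rates with the reported mean (1.006) and
# temporal process variance (0.029), project 20 years, and count the
# trajectories that end below where they started. A simplified sketch,
# not the authors' exact simulation.
import numpy as np

mean_lam, var_lam, years, reps = 1.006, 0.029, 20, 100_000
sigma2 = np.log(1 + var_lam / mean_lam**2)   # matching lognormal params
mu = np.log(mean_lam) - sigma2 / 2

rng = np.random.default_rng(6)
log_growth = rng.normal(mu, np.sqrt(sigma2), size=(reps, years)).sum(axis=1)
print("P(decline over 20 yr) ~", (log_growth < 0).mean())   # ~0.58
```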
Possibility of modifying the growth trajectory in Raeini Cashmere goat.
Ghiasi, Heydar; Mokhtari, M S
2018-03-27
The objective of this study was to investigate the possibility of modifying the growth trajectory in the Raeini Cashmere goat breed. In total, 13,193 live body weight records collected from 4,788 Raeini Cashmere goats were used. According to Akaike's information criterion (AIC), the single-trait random regression model that included fourth-order Legendre polynomials for the direct and maternal genetic effects and the maternal and individual permanent environmental effects was the best model for estimating (co)variance components. The matrices of eigenvectors of the (co)variances between random regression coefficients of the direct additive genetic effect were used to calculate eigenfunctions, and different eigenvector indices were also constructed. The results showed that the first eigenvalue explained 79.90% of the total genetic variance; changes in body weight along the first eigenfunction will therefore be obtained rapidly. Selection based on the first eigenvector will cause favorable positive genetic gains for all body weights considered from birth to 12 months of age. For modifying the growth trajectory in the Raeini Cashmere goat, selection should be based on the second eigenfunction. The second eigenvalue accounted for 14.41% of the total genetic variance for body weights, which is low in comparison with the genetic variance explained by the first eigenvalue. Complex patterns of genetic change in the growth trajectory were observed under the third and fourth eigenfunctions, and only a low amount of genetic variance was explained by the third and fourth eigenvalues.
Point sensitive NMR imaging system using a magnetic field configuration with a spatial minimum
Eberhard, P.H.
A point-sensitive NMR imaging system in which a main solenoid coil produces a relatively strong and substantially uniform magnetic field, and a pair of perturbing coils powered by current in the same direction superimposes a pair of relatively weak perturbing fields on the main field to produce a resultant point of minimum field strength at a desired location along the Z-axis. Two other pairs of perturbing coils superimpose relatively weak field gradients on the main field in directions along the X- and Y-axes to locate the minimum field point at a desired location in a plane normal to the Z-axis. An rf generator irradiates a tissue specimen in the field with radio-frequency energy so that the desired nuclei in a small volume at the point of minimum field strength will resonate.
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct and equivalent calculation methods for the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, which indicates that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models can effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
Jensen's Inequality Predicts Effects of Environmental Variation
Jonathan J. Ruel; Matthew P. Ayres
1999-01-01
Many biologists now recognize that environmental variance can exert important effects on patterns and processes in nature that are independent of average conditions. Jensen's inequality is a mathematical proof that is seldom mentioned in the ecological literature but which provides a powerful tool for predicting some direct effects of environmental variance in...
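The inequality is easy to demonstrate numerically: for a convex response f, E[f(X)] ≥ f(E[X]), so variance alone shifts the mean response even when the mean environment is unchanged. The response function below is an arbitrary convex stand-in.

```python
# Numerical illustration of Jensen's inequality: for a convex response
# f, E[f(X)] >= f(E[X]), so environmental variance alone shifts the
# mean response even when the mean environment is unchanged. The
# exponential response here is an arbitrary convex stand-in.
import numpy as np

rng = np.random.default_rng(7)
temp = rng.normal(20.0, 4.0, 1_000_000)   # variable environment, mean 20

f = lambda t: np.exp(0.1 * t)             # convex response, e.g. growth
print("f(E[X]) =", round(f(temp.mean()), 3))   # response at the mean
print("E[f(X)] =", round(f(temp).mean(), 3))   # mean response, larger
```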
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated here. The procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to the reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples 200 syllables long are the minimum appropriate for obtaining stable Riley severity scores. The procedural variants provide similar severity scores.
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into the seed direct genetic effect (G0), the cytoplasm genetic effect (C), and the maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into the direct genetic by environment interaction effect (G0E), the cytoplasm genetic by environment interaction effect (CE), and the maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components, and GmE into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of the genetic components are listed for parents, F1, F2 and backcrosses. A set of parents, their reciprocal F1, and F2 seeds is suitable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and it also gives unbiased estimates of covariance components between two traits. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can then be used in t-tests of the parameters. Unbiasedness and efficiency in estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.
12 CFR 3.11 - Standards for determination of appropriate individual minimum capital ratios.
Code of Federal Regulations, 2010 CFR
2010-01-01
Banks and Banking. COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY. MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES. Establishment of Minimum Capital Ratios for an Individual Bank. § 3.11 Standards...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Establishment of Minimum Capital Ratios for an Individual Bank § 3.10 Applicability. The OCC may require higher minimum capital ratios for an individual bank in view of its circumstances. For example...
NASA Astrophysics Data System (ADS)
Sun, Xuelian; Liu, Zixian
2016-02-01
In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients, obtained by the detrended cross-correlation analysis (DCCA) method, can describe nonlinear correlation between assets and can be decomposed across different time scales. These properties make it possible to improve investment performance and make it more worthwhile to investigate the scale behavior of portfolios. The minimum variance portfolio (MVP) model and the mean-variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of the portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
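As a concrete illustration of the MVP step, the sketch below computes global minimum variance weights from a correlation matrix (which could be assembled from DCCA coefficients at a chosen scale) and per-asset volatilities; the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1) is standard, and all inputs here are hypothetical.

```python
import numpy as np

def min_variance_weights(corr, vols):
    """Global minimum-variance portfolio weights for a given correlation
    matrix (e.g., built from DCCA coefficients at one scale) and per-asset
    volatilities; short positions are allowed by this closed form."""
    cov = np.outer(vols, vols) * corr      # covariance from correlation + volatilities
    ones = np.ones(len(vols))
    w = np.linalg.solve(cov, ones)         # proportional to Sigma^{-1} 1
    return w / (ones @ w)                  # normalize so weights sum to 1

# Hypothetical three-asset example:
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
print(min_variance_weights(corr, vols=np.array([0.20, 0.15, 0.10])))
```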
Demodulation of messages received with low signal to noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computational savings compared with conventional implementations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The probe trajectory and probe parameters are treated in a 'consider' mode, in which their estimates are not improved but their associated uncertainties are still permitted to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percent.
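The 'consider' mode described here is commonly implemented as a Schmidt-Kalman update, in which the gain is computed for the estimated states only while the consider parameters' covariance still inflates the innovation statistics. Below is a minimal, hypothetical sketch of one such minimum variance measurement update; the partitioning and symbol names are assumptions, not the paper's actual filter.

```python
import numpy as np

def consider_update(x, Pxx, Pxc, Pcc, z, Hx, Hc, R):
    """One Schmidt-Kalman ('consider') measurement update: the state x is
    improved, the consider parameters are not, yet their covariance Pcc
    still contributes to the innovation covariance and the gain."""
    # Innovation covariance, including consider-parameter uncertainty
    # (consider parameters are assumed zero-mean here).
    S = (Hx @ Pxx @ Hx.T + Hx @ Pxc @ Hc.T
         + Hc @ Pxc.T @ Hx.T + Hc @ Pcc @ Hc.T + R)
    K = (Pxx @ Hx.T + Pxc @ Hc.T) @ np.linalg.inv(S)  # gain for the state only
    x_new = x + K @ (z - Hx @ x)
    I = np.eye(len(x))
    Pxx_new = (I - K @ Hx) @ Pxx - K @ Hc @ Pxc.T     # state covariance shrinks
    Pxc_new = (I - K @ Hx) @ Pxc - K @ Hc @ Pcc       # cross-covariance update
    return x_new, Pxx_new, Pxc_new                    # Pcc is deliberately untouched
```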
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on a minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance.
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand, with varying concentrations of aluminum, were made for testing the system prior to its application in human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition model and the photopeak fitting model with the inverse-variance weighted mean both provided better results than the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) than the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements.
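The inverse-variance weighted mean used here is the minimum variance way to combine independent, unbiased repeat measurements; a minimal sketch (with made-up inputs) follows.

```python
import numpy as np

def ivw_mean(values, variances):
    """Inverse-variance weighted mean: weighting each measurement by
    1/variance gives the minimum-variance unbiased combination of
    independent, unbiased measurements, with combined variance 1/sum(w)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, 1.0 / np.sum(w)

# Hypothetical repeat photopeak amplitudes and their variances:
print(ivw_mean([4.8, 5.3, 5.1], [0.40, 0.25, 0.30]))
```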
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans report uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to become involved in recovery planning to improve access to quantitative data.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
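For readers unfamiliar with the algorithm, the sketch below solves the ordinary-kriging system for one target point: the weights are chosen to be unbiased (they sum to 1) and of minimum variance given a semi-variogram; the exponential variogram and all numbers are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def ordinary_kriging(coords, values, target, gamma):
    """Ordinary-kriging point estimate and kriging variance: solve the
    linear system built from the semi-variogram gamma(h), with a Lagrange
    multiplier enforcing that the weights sum to one (unbiasedness)."""
    n = len(values)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(h)                 # semi-variogram between stations
    A[n, :n] = A[:n, n] = 1.0            # unbiasedness constraint row/column
    b = np.append(gamma(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ values, w @ b[:n] + mu    # estimate, kriging variance

# Illustrative exponential semi-variogram (sill 1.0, range 50 distance units):
gamma = lambda hh: 1.0 - np.exp(-np.asarray(hh) / 50.0)
```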
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decision-making performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum-fuel situation. Seven-point Likert-type scales were used to rate variables based on a model of crew coordination and decision making. The variables drew on concepts such as decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and the competency motivation of subordinate crewmembers. Five front-end variables of the model served in turn as dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-components, varimax-rotated factor analysis supported the model structure suggested by the regression analyses.
NASA Astrophysics Data System (ADS)
Yan, Jiaqing; Wei, Yun; Wang, Yinghua; Xu, Gang; Li, Zheng; Li, Xiaoli
2015-04-01
Transcranial direct current stimulation (tDCS) is a noninvasive, safe, and convenient neuromodulatory technique used in neurological rehabilitation, treatment, and other aspects of brain disorders. However, evaluating the effects of tDCS remains difficult. We aimed to evaluate the effects of tDCS from hemodynamic changes measured with functional near-infrared spectroscopy (fNIRS). Five healthy participants were enrolled, and anodal tDCS was applied to the left motor-related cortex, with the cathode positioned on the right dorsolateral supraorbital area. fNIRS data were collected from the right motor-related area at the same time. Functional connectivity (FC) between intracortical regions was calculated between fNIRS channels using a minimum variance distortionless response magnitude-squared coherence (MVDR-MSC) method. The levels of oxy-Hb change and the FC between channels during the prestimulation, stimulation, and poststimulation stages were compared. Results showed no significant difference in oxy-Hb levels, but the FC measured by MVDR-MSC decreased significantly during tDCS compared with pre-tDCS and post-tDCS, whereas the FC difference between pre-tDCS and post-tDCS was not significant. We conclude that coherence calculated from resting-state fNIRS may be a useful tool for evaluating the effects of anodal tDCS and optimizing parameters for tDCS application.
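MVDR-MSC builds on the minimum variance distortionless response (Capon) principle; as background, the single-channel MVDR spectrum sketch below shows the core computation, P(f) = 1/(eᴴR⁻¹e), with the coherence variant extending it to channel pairs. The filter order, sampling rate, and data are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

def mvdr_spectrum(x, order, freqs, fs):
    """Capon/MVDR power spectrum: at each frequency, the output power of the
    minimum-variance filter constrained to pass that frequency undistorted,
    P(f) = 1 / (e^H R^{-1} e), with R a Toeplitz autocorrelation matrix."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Biased autocorrelation estimate up to the chosen filter order.
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order)]) / len(x)
    Rinv = np.linalg.inv(toeplitz(r))
    p = []
    for f in freqs:
        e = np.exp(-2j * np.pi * f / fs * np.arange(order))  # steering vector
        p.append(1.0 / np.real(e.conj() @ Rinv @ e))
    return np.asarray(p)
```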
More about arc-polarized structures in the solar wind
NASA Astrophysics Data System (ADS)
Haaland, S.; Sonnerup, B.; Paschmann, G.
2012-05-01
We report results from a Cluster-based study of the properties of 28 arc-polarized magnetic structures (also called rotational discontinuities) in the solar wind. These Alfvénic events were selected from the database created and analyzed by Knetter (2005), using criteria chosen to eliminate ambiguous cases. His studies showed that standard four-spacecraft timing analysis in most cases lacks sufficient accuracy to identify the small normal magnetic field components expected to accompany such structures, leaving unanswered the question of their existence. Our study aims to break this impasse. By careful application of minimum variance analysis of the magnetic field (MVAB) from each individual spacecraft, we show that, in most cases, a small but significantly non-zero magnetic field component was present in the direction perpendicular to the discontinuity. In the very few cases where this component was found to be large, examination revealed that MVAB had produced an unusual and unexplained orientation of the normal vector. On the whole, MVAB shows that many verifiable rotational discontinuities (Bn ≠ 0) exist in the solar wind and that their eigenvalue ratio (EVR = intermediate/minimum variance) can be extremely large (up to EVR = 400). Each of our events comprises four individual spacecraft crossings. The events include 17 ion-polarized cases and 11 electron-polarized ones. Fifteen of the ion events have widths ranging from 9 to 21 ion inertial lengths, with two outliers at 46 and 54. The electron-polarized events are generally thicker: nine cases fall in the range 20-71 ion inertial lengths, with two outliers at 9 and 13. In agreement with theoretical predictions from a one-dimensional, ideal, Hall-MHD description (Sonnerup et al., 2010), the ion-polarized events show a small depression in field magnitude, while the electron-polarized ones tend to show a small enhancement. This effect was also predicted by Wu and Lee (2000). Judging only from the sense of the plasma flow across these directional discontinuities (DDs), their propagation appears to be sunward as often as anti-sunward. However, we argue that this result can be misleading owing to the possible presence of magnetic islands within the DDs. How the rotational discontinuities come into existence, how they evolve with time, and what roles they play in the solar wind remain open questions.
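Since MVAB is central to this and several other entries here, a minimal sketch of the computation: eigen-decompose the covariance of the measured field components; the eigenvector of the smallest eigenvalue estimates the discontinuity normal, and the intermediate-to-minimum eigenvalue ratio (the EVR quoted above) gauges how well the normal is determined. Array shapes are assumptions.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Minimum variance analysis of the magnetic field (MVAB).
    B: (n_samples, 3) array of field vectors across the discontinuity.
    Returns the estimated normal, the intermediate/minimum eigenvalue
    ratio (EVR), and the mean normal field component Bn."""
    M = np.cov(B, rowvar=False)        # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    normal = evecs[:, 0]               # minimum-variance direction
    evr = evals[1] / evals[0]          # large EVR -> well-determined normal
    return normal, evr, (B @ normal).mean()
```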
ERIC Educational Resources Information Center
Krus, David J.; Krus, Patricia H.
1978-01-01
The conceptual differences between coded regression analysis and traditional analysis of variance are discussed. Also, a modification of several SPSS routines is proposed which allows for direct interpretation of ANOVA and ANCOVA results in a form stressing the strength and significance of scrutinized relationships. (Author)
Navigator alignment using radar scan
Doerry, Armin W.; Marquette, Brandeis
2016-04-05
The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated therewith at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.
Shang, Zhi-Yuan; Wang, Jian; Zhang, Wen; Li, Yan-Yan; Cui, Ming-Xing; Chen, Zhen-Ju; Zhao, Xing-Yun
2013-01-01
A measurement was made of the vertical-direction tree-ring stable carbon isotope ratio (delta13C) and tree-ring width of Pinus sylvestris var. mongolica in the northern Daxing'an Mountains of Northeast China, and the relationship between the vertical variations of tree-ring delta13C and tree-ring width was analyzed. In the whole xylem ring, earlywood (EW), and bark endodermis, the delta13C all exhibited an increasing trend from the top to the base at first, with the maximum at the bottom of the tree crown, and then decreased rapidly to the minimum further down. The ratio of average EW to latewood (LW) ring width increased from the base to the top. The average annual sequence of delta13C in the vertical direction corresponded inversely with the average annual sequence of tree-ring width, and followed a trend broadly in line with the average annual sequence of the EW:LW ring-width ratio above the tree crown. Variance analysis showed significant differences in the sequences of tree-ring delta13C and ring width in the vertical direction, and the magnitude of the vertical delta13C variability was essentially the same as that of the inter-annual delta13C variability. The year-to-year variation trend of the vertical delta13C sequence was approximately identical. For each sample, the delta13C sequence at a given height was negatively correlated with the ring-width sequence, but the statistical significance differed with tree height.
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
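A minimal sketch of the segment-averaging idea described here, assuming three equally sampled real series and non-overlapping segments; the windowing and the restriction of the grid to the right-half-plane/Nyquist triangle are illustrative choices.

```python
import numpy as np

def cross_bispectrum(x, y, z, nfft):
    """Segment-averaged cross-bispectrum estimate,
    B(f1, f2) = E[ X(f1) Y(f2) conj(Z(f1 + f2)) ]:
    averaging the triple product over segments reduces the variance of
    the estimate, at the cost of frequency resolution."""
    nseg = len(x) // nfft                  # non-overlapping segments
    m = nfft // 2 + 1
    B = np.zeros((m, m), dtype=complex)
    win = np.hanning(nfft)
    for s in range(nseg):
        sl = slice(s * nfft, (s + 1) * nfft)
        X = np.fft.fft(win * x[sl])
        Y = np.fft.fft(win * y[sl])
        Z = np.fft.fft(win * z[sl])
        for i in range(m):
            for j in range(m - i):         # keep f1 + f2 at or below Nyquist
                B[i, j] += X[i] * Y[j] * np.conj(Z[i + j])
    return B / nseg
```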
Point sensitive NMR imaging system using a magnetic field configuration with a spatial minimum
Eberhard, Philippe H.
1985-01-01
A point-sensitive NMR imaging system (10) in which a main solenoid coil (11) produces a relatively strong and substantially uniform magnetic field and a pair of perturbing coils (PZ1 and PZ2) powered by current in the same direction superimposes a pair of relatively weak perturbing fields on the main field to produce a resultant point of minimum field strength at a desired location in a direction along the Z-axis. Two other pairs of perturbing coils (PX1, PX2; PY1, PY2) superimpose relatively weak field gradients on the main field in directions along the X- and Y-axes to locate the minimum field point at a desired location in a plane normal to the Z-axes. An RF generator (22) irradiates a tissue specimen in the field with radio frequency energy so that desired nuclei in a small volume at the point of minimum field strength will resonate.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite this assumption, most instruments are known to exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity has a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
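A minimal sketch of the recommended recipe, under assumed inputs: fit the power model var ≈ a·signal^b from replicate measurements at each calibration level, then weight the calibration regression by the reciprocal standard deviation (the convention np.polyfit expects).

```python
import numpy as np

def weighted_calibration(x, y, replicates):
    """Weighted linear calibration under the power model of variance.
    replicates: list of arrays of repeat signals at each calibration level,
    used to fit log(variance) vs log(mean signal)."""
    means = np.array([np.mean(r) for r in replicates])
    varis = np.array([np.var(r, ddof=1) for r in replicates])
    b, log_a = np.polyfit(np.log(means), np.log(varis), 1)  # var = a * signal^b
    sigma = np.sqrt(np.exp(log_a) * np.abs(y) ** b)         # estimated per-point SD
    # np.polyfit expects weights of 1/sigma (not 1/sigma**2).
    slope, intercept = np.polyfit(x, y, 1, w=1.0 / sigma)
    return slope, intercept
```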
Loth, K A; Friend, S; Horning, M L; Neumark-Sztainer, D; Fulkerson, J A
2016-12-01
This study examines associations between an expanded conceptualization of food-related parenting practices, specifically directive and non-directive control, and child weight (BMI z-score) and dietary outcomes [Healthy Eating Index (HEI) 2010, daily servings of fruits/vegetables] within a sample of parent-child dyads (8-12 years old; n = 160). Baseline data from the Healthy Home Offerings via the Mealtime Environment (HOME Plus) randomized controlled trial were used to test associations between directive and non-directive control and child dietary outcomes and weight, using multiple regression analyses adjusted for parental education. The overall variance explained by the directive and non-directive control constructs was also calculated. Markers of directive control included pressure-to-eat and food restriction, assessed using subscales from the Child Feeding Questionnaire; markers of non-directive control were assessed with a parental role modeling scale and a home food availability inventory in which an obesogenic home food environment score was assigned based on the types and number of unhealthful foods available within the child's home food environment. Food restriction and pressure-to-eat were positively and negatively associated with BMI z-scores, respectively, but not with dietary outcomes. An obesogenic home food environment was inversely associated with both dietary outcomes; parental role modeling of healthful eating was positively associated with both dietary outcomes. Neither non-directive behavioral construct was significantly associated with BMI z-scores. Greater total variance in BMI-z was explained by directive control; greater total variance in dietary outcomes was explained by non-directive control. Including a construct of food-related parenting practices with separate markers for directive and non-directive control should be considered in future research. These concepts address different forms of parental control and, in the present study, yielded unique associations with child dietary and weight outcomes.
Milosavljevic, Stephan; McBride, David I; Bagheri, Nasser; Vasiljev, Radivoj M; Mani, Ramakrishnan; Carman, Allan B; Rehn, Borje
2011-04-01
The purpose of this study was to determine exposure to whole-body vibration (WBV) and mechanical shock in rural workers who use quad bikes and to explore how personal, physical, and workplace characteristics influence exposure. A seat-pad-mounted triaxial accelerometer and data logger recorded full-workday vibration and shock data from 130 New Zealand rural workers. Personal, physical, and workplace characteristics were gathered using a modified version of the Whole Body Vibration Health Surveillance Questionnaire. WBV and mechanical shocks were analysed in accordance with International Organization for Standardization standards (ISO 2631-1 and ISO 2631-5) and are presented as vibration dose value (VDV) and mechanical shock (S(ed)) exposures. VDV(Z) consistently exceeded the European Union guideline exposure action thresholds (Guide to good practice on whole body vibration, Directive 2002/44/EC on minimum health and safety, European Commission Directorate General Employment, Social Affairs and Equal Opportunities, 2006), with some workers exceeding exposure limit thresholds. Exposure to mechanical shock was also evident. Increasing age had the strongest (negative) association with vibration and shock exposure, with body mass index (BMI) having a similar but weaker effect. Age, daily driving duration, dairy farming, and the use of two rear shock absorbers formed the strongest multivariate model, explaining 33% of the variance in VDV(Z). Only age and dairy farming combined to explain 17% of the variance in daily mechanical shock. Twelve-month prevalence was highest for low back pain at 57.7% and lowest for upper back pain at 13.8%. Personal (age and BMI), physical (shock absorbers and velocity), and workplace characteristics (driving duration and dairy farming) suggest that a mix of engineered workplace and behavioural interventions is required to reduce this level of exposure to vibration and shock.
Transcranial Electrical Neuromodulation Based on the Reciprocity Principle
Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don
2016-01-01
A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality, using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128- and 256-electrode cases, we found that greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints. PMID:27303311
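For reference, the LCMV benchmark in this entry solves a linearly constrained quadratic program with a closed form; the sketch below is a generic, hypothetical version in which a penalty matrix R (normally built from lead fields over the non-target brain) is minimized subject to hitting the target current density and injecting zero net current.

```python
import numpy as np

def lcmv_injection(Lt, d, R):
    """LCMV-style electrode current pattern: minimize c' R c subject to
    Lt c = d (desired current density at the target; Lt is the 3 x n
    target lead field) and sum(c) = 0 (zero net injected current)."""
    n = Lt.shape[1]
    A = np.vstack([Lt, np.ones((1, n))])    # stacked linear constraints
    t = np.append(d, 0.0)
    RA = np.linalg.solve(R, A.T)            # R^{-1} A'
    return RA @ np.linalg.solve(A @ RA, t)  # c = R^{-1} A' (A R^{-1} A')^{-1} t
```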
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background: Manual annotation of landmarks is a known source of variance that exists in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator effects and portrait-related variables showed only marginal variability. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, with more loosely defined landmarks having the highest variability. The variables embedded in the portrait and the experience of a trained operator had only a marginal influence on the variability. Further, using 14 of the annotated landmarks, we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC), and the Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition, or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate, and the sample size.
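To make the decomposition concrete, the self-contained sketch below runs a small Monte Carlo bias-variance experiment on polynomial fits of a noisy sine (all settings hypothetical): bias falls and variance rises as model complexity grows, mirroring the tradeoff the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
f_true = np.sin(2 * np.pi * x)                 # hypothetical true function

def bias_variance(degree, n_rep=500, noise=0.3):
    """Monte Carlo estimate of squared bias and variance of a polynomial
    fit of the given degree, averaged over the design points."""
    fits = np.empty((n_rep, x.size))
    for r in range(n_rep):
        y = f_true + noise * rng.standard_normal(x.size)
        fits[r] = np.polyval(np.polyfit(x, y, degree), x)
    bias2 = np.mean((fits.mean(axis=0) - f_true) ** 2)
    var = fits.var(axis=0).mean()
    return bias2, var

for deg in (1, 3, 9):                          # complexity sweep
    print(deg, bias_variance(deg))
```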
Estimation of stable boundary-layer height using variance processing of backscatter lidar data
NASA Astrophysics Data System (ADS)
Saeed, Umar; Rocadenbosch, Francesc
2017-04-01
The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and assessing the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal stable boundary-layer height (SBLH) estimation using variance processing of attenuated backscatter lidar measurements, including its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under project TEC2015-63832-P, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
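A minimal sketch of the local-minimum idea, assuming a time-height matrix of attenuated backscatter; the gate exclusion near the surface and the choice of the first local minimum are illustrative assumptions rather than the paper's tuned algorithm.

```python
import numpy as np

def sblh_from_variance(backscatter, heights, min_gate=5):
    """Minimum-variance SBLH estimate: compute the temporal variance of
    attenuated backscatter at each range gate, then return the height of
    the first local minimum above the lowest (excluded) gates, where
    aerosol stratification in the stable layer suppresses variance."""
    var_profile = np.var(backscatter, axis=0)     # (time, height) -> per-gate variance
    v = var_profile[min_gate:]
    idx = np.where((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]))[0] + 1 + min_gate
    return heights[idx[0]] if idx.size else np.nan
```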
NASA Astrophysics Data System (ADS)
Vech, Daniel; Chen, Christopher
2016-04-01
One of the most important features of plasma turbulence is its anisotropy, which arises from the presence of the magnetic field. Understanding this anisotropy is particularly important for revealing how the turbulent cascade operates. It is well known that anisotropy exists with respect to the mean magnetic field; however, recent theoretical studies have suggested anisotropy with respect to the radial direction as well. The purpose of this study is to investigate the variance and spectral anisotropies of solar wind turbulence with multi-point spacecraft observations. The study includes data from the Advanced Composition Explorer (ACE), WIND, and Cluster spacecraft. Second-order structure functions are derived for two different spacecraft configurations: when the pair of spacecraft are separated radially (along the spacecraft-Sun line) and when they are separated along the transverse direction. We analyze the effect of the different sampling directions on the variance anisotropy, global spectral anisotropy, and local 3D spectral anisotropy, and discuss the implications for our understanding of solar wind turbulence.
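The second-order structure function at the heart of this analysis has a compact form; below is a minimal sketch for a single time series of field vectors, with lag handling and array shapes as assumptions. Sampling the same statistic along radially and transversely separated baselines is what exposes the anisotropy.

```python
import numpy as np

def structure_function(B, lags):
    """Second-order structure function SF2(tau) = <|B(t + tau) - B(t)|^2>
    for a (n_samples, 3) series of field vectors, evaluated at integer
    sample lags."""
    return np.array([np.mean(np.sum((B[l:] - B[:-l]) ** 2, axis=1))
                     for l in lags])
```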
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a brand-new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects the minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances across age levels. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on the intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed with a simple structure and a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach can successfully measure the volume and quantify the distribution of the CT numbers of mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially on already-acquired scans.
Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji
2017-01-01
The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
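To illustrate how such a fit is used, the sketch below evaluates the (co)variance over days in milk implied by a Legendre-basis random regression, G(t1, t2) = φ(t1)ᵀ K φ(t2); the DIM range, the scaling to [-1, 1], and the unnormalized numpy basis are assumptions (the literature often uses normalized Legendre polynomials).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariance(K, dim, dim_range=(5, 305)):
    """(Co)variance among days in milk (DIM) implied by a random
    regression on Legendre polynomials: G(t1, t2) = phi(t1)' K phi(t2),
    where K is the estimated coefficient (co)variance matrix."""
    lo, hi = dim_range
    t = 2.0 * (np.asarray(dim) - lo) / (hi - lo) - 1.0   # rescale DIM to [-1, 1]
    Phi = legendre.legvander(t, K.shape[0] - 1)          # basis at each DIM
    return Phi @ K @ Phi.T                               # variances on the diagonal

# Example: a hypothetical third-order (4x4) genetic coefficient matrix K
# would come from the Gibbs-sampling estimates; diag(G) then gives the
# genetic variance curve over the lactation.
```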
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
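The scalar/vector distinction is easy to state in code; the sketch below (shapes and frame conventions assumed) computes both mean speeds from separate speed and direction records and the longitudinal/lateral variances in the mean-wind frame, which need not be equal in weak winds.

```python
import numpy as np

def wind_stats(speed, theta_deg):
    """Scalar vs. vector mean wind speed, and longitudinal/lateral velocity
    variances, from separately recorded speed (e.g., cup anemometer) and
    direction (e.g., vane) time series."""
    th = np.radians(theta_deg)
    u, v = speed * np.cos(th), speed * np.sin(th)
    scalar_mean = speed.mean()                   # average of recorded speeds
    vector_mean = np.hypot(u.mean(), v.mean())   # <= scalar mean, esp. in weak winds
    # Rotate into the mean-wind frame: along-wind (longitudinal), cross-wind (lateral).
    phi = np.arctan2(v.mean(), u.mean())
    along = u * np.cos(phi) + v * np.sin(phi)
    cross = -u * np.sin(phi) + v * np.cos(phi)
    return scalar_mean, vector_mean, along.var(), cross.var()
```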
The minimum control authority of a system of actuators with applications to Gravity Probe-B
NASA Technical Reports Server (NTRS)
Wiktor, Peter; Debra, Dan
1991-01-01
The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
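A minimal numerical sketch of the weakest-direction idea, under simplifying assumptions: actuator outputs are limited to |u_i| <= 1, the minimum-norm allocation is used (so this is a conservative estimate; the exact value per direction is a small linear program), and the actuator matrix B maps inputs to net force.

```python
import numpy as np

def minimum_control_authority(B, n_dirs=2000, seed=0):
    """Conservative estimate of the minimum control authority of an
    actuator set with force map f = B u and limits |u_i| <= 1: for each
    sampled unit direction, scale the minimum-norm allocation until the
    first actuator saturates, then take the worst direction."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, B.shape[0]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    Bpinv = np.linalg.pinv(B)
    worst = np.inf
    for d in dirs:
        u = Bpinv @ d                              # min-norm input producing direction d
        worst = min(worst, 1.0 / np.abs(u).max())  # achievable force scale along d
    return worst
```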
Duan, Wenjie
2015-01-01
Objective. Relationship, vitality, and conscientiousness are three fundamental virtues that have recently been identified as important individual differences for health, well-being, and positive development. This cross-sectional study attempted to explore the relationship between the three constructs and post-traumatic growth (PTG) in three groups: indirect trauma samples without post-traumatic stress disorder (PTSD), direct trauma samples without PTSD, and direct trauma samples with PTSD. Methods. A total of 340 community participants from Sichuan Province, Mainland China were involved in the study, most of whom had experienced the Wenchuan and Lushan earthquakes. Participants were required to complete self-reported questionnaire packages at a single time point, yielding scores on virtues (Chinese Virtues Questionnaire), PTSD (PTSD Checklist-Specific), and PTG (Post-traumatic Growth Inventory-Chinese). Results. Significant and positive correlations between the three virtues and PTG were identified (r = .39-.56; p < .01). Further stepwise regression analysis revealed that, in the indirect trauma samples, vitality explained 32% of the variance in PTG. In the direct trauma sample without PTSD, relationship and conscientiousness together explained 32% of the variance in PTG, whereas in the direct trauma sample with PTSD, only conscientiousness accounted for 31% of the variance in PTG. Conclusion. This cross-sectional investigation partly revealed the roles of different virtues in trauma contexts. The findings suggest important implications for strengths-based treatment. PMID:25870774
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
Wheelwright, Nathaniel T; Keller, Lukas F; Postma, Erik
2014-11-01
The heritability (h(2) ) of fitness traits is often low. Although this has been attributed to directional selection having eroded genetic variation in direct proportion to the strength of selection, heritability does not necessarily reflect a trait's additive genetic variance and evolutionary potential ("evolvability"). Recent studies suggest that the low h(2) of fitness traits in wild populations is caused not by a paucity of additive genetic variance (VA ) but by greater environmental or nonadditive genetic variance (VR ). We examined the relationship between h(2) and variance-standardized selection intensities (i or βσ ), and between evolvability (IA :VA divided by squared phenotypic trait mean) and mean-standardized selection gradients (βμ ). Using 24 years of data from an island population of Savannah sparrows, we show that, across diverse traits, h(2) declines with the strength of selection, whereas IA and IR (VR divided by squared trait mean) are independent of the strength of selection. Within trait types (morphological, reproductive, life-history), h(2) , IA , and IR are all independent of the strength of selection. This indicates that certain traits have low heritability because of increased residual variance due to the age at which they are expressed or the multiple factors influencing their expression, rather than their association with fitness. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... directly to the out-of-doors. The minimum total window or skylight area, including windows in doors, shall... percent of the minimum window or skylight area required, except where comparably adequate ventilation is...
Luan, Sheng; Luo, Kun; Chai, Zhan; Cao, Baoxiang; Meng, Xianhong; Lu, Xia; Liu, Ning; Xu, Shengyu; Kong, Jie
2015-12-14
Our aim was to estimate the genetic parameters for the direct genetic effect (DGE) and indirect genetic effects (IGE) on adult body weight in the Pacific white shrimp. IGE is the heritable effect of an individual on the trait values of its group mates. To examine IGE on body weight, 4725 shrimp from 105 tagged families were tested in multiple small test groups (MSTG). Each family was separated into three groups (15 shrimp per group) that were randomly assigned to 105 concrete tanks with shrimp from two other families. To estimate breeding values, one large test group (OLTG) in a 300 m(2) circular concrete tank was used for the communal rearing of 8398 individuals from 105 families. Body weight was measured after a growth-test period of more than 200 days. Variance components for body weight in the MSTG programs were estimated using an animal model excluding or including IGE whereas variance components in the OLTG programs were estimated using a conventional animal model that included only DGE. The correlation of DGE between MSTG and OLTG programs was estimated by a two-trait animal model that included or excluded IGE. Heritability estimates for body weight from the conventional animal model in MSTG and OLTG programs were 0.26 ± 0.13 and 0.40 ± 0.06, respectively. The log likelihood ratio test revealed significant IGE on body weight. Total heritable variance was the sum of direct genetic variance (43.5%), direct-indirect genetic covariance (2.1%), and indirect genetic variance (54.4%). It represented 73% of the phenotypic variance and was more than two-fold greater than that (32%) obtained by using a classical heritability model for body weight. Correlations of DGE on body weight between MSTG and OLTG programs were intermediate regardless of whether IGE were included or not in the model. Our results suggest that social interactions contributed to a large part of the heritable variation in body weight. Small and non-significant direct-indirect genetic correlations implied that neutral or slightly cooperative heritable interactions, rather than competition, were dominant in this population but this may be due to the low rearing density.
40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...
VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)
NASA Astrophysics Data System (ADS)
Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.
2013-11-01
We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
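A minimal Monte Carlo sketch of this kind of mean-squared-error comparison for the Poisson mean (the prior hyperparameters, sample size, and trial count are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, n, n_trials = 2.0, 10, 20000
alpha, beta = 2.0, 1.0             # assumed Gamma(shape, rate) prior

x = rng.poisson(lam_true, size=(n_trials, n))
s = x.sum(axis=1)

mle = s / n                        # sample mean: both the MLE and the MVUE
bayes = (alpha + s) / (beta + n)   # posterior mean under the gamma prior

print("MSE (MLE/MVUE):", np.mean((mle - lam_true) ** 2))
print("MSE (Bayes)   :", np.mean((bayes - lam_true) ** 2))
```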
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
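The bias-and-variance bookkeeping described, image MSE as pointwise squared bias plus variance over repeated noisy reconstructions, can be written compactly; a sketch assuming a stack of reconstructions and a known test image:

```python
import numpy as np

def image_mse_decomposition(recons, truth):
    """Decompose image MSE into bias^2 and variance over repeats.

    recons : (n_repeats, H, W) stack of reconstructions of the same object
    truth  : (H, W) known test image
    """
    mean_img = recons.mean(axis=0)
    bias2 = (mean_img - truth) ** 2    # squared bias per pixel
    var = recons.var(axis=0)           # variance per pixel
    # Image MSE = mean over pixels of (bias^2 + variance)
    return bias2.mean(), var.mean(), (bias2 + var).mean()
```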
Sink fast and swim harder! Round-trip cost-of-transport for buoyant divers.
Miller, Patrick J O; Biuw, Martin; Watanabe, Yuuki Y; Thompson, Dave; Fedak, Mike A
2012-10-15
Efficient locomotion between prey resources at depth and oxygen at the surface is crucial for breath-hold divers to maximize time spent in the foraging layer, and thereby net energy intake rates. The body density of divers, which changes with body condition, determines the apparent weight (buoyancy) of divers, which may affect round-trip cost-of-transport (COT) between the surface and depth. We evaluated alternative predictions from external-work and actuator-disc theory of how non-neutral buoyancy affects round-trip COT to depth, and the minimum COT speed for steady-state vertical transit. Not surprisingly, the models predict that one-way COT decreases (increases) when buoyancy aids (hinders) one-way transit. At extreme deviations from neutral buoyancy, gliding at terminal velocity is the minimum COT strategy in the direction aided by buoyancy. In the transit direction hindered by buoyancy, the external-work model predicted that minimum COT speeds would not change at greater deviations from neutral buoyancy, but minimum COT speeds were predicted to increase under the actuator disc model. As previously documented for grey seals, we found that vertical transit rates of 36 elephant seals increased in both directions as body density deviated from neutral buoyancy, indicating that actuator disc theory may more closely predict the power requirements of divers affected by gravity than an external work model. For both models, minor deviations from neutral buoyancy did not affect minimum COT speed or round-trip COT itself. However, at body-density extremes, both models predict that savings in the aided direction do not fully offset the increased COT imposed by the greater thrusting required in the hindered direction.
Mixing in the shear superposition micromixer: three-dimensional analysis.
Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline
2004-05-15
In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some values of parameters a simple model can be derived that clearly represents the flow nature. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels with oscillatory flows, whose frequencies are increasing downstream. The optimization of MVC results for single side-channel mixing is presented. It is shown that dependence of MVC on frequency is not monotone, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened over the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.
Polarized-interferometer feasibility study
NASA Technical Reports Server (NTRS)
Raab, F. H.
1983-01-01
The feasibility of using a polarized-interferometer system as a rendezvous and docking sensor for two cooperating spacecraft was studied. The polarized interferometer is a radio frequency system for long range, real time determination of relative position and attitude. Range is determined by round trip signal timing. Direction is determined by radio interferometry. Relative roll is determined from signal polarization. Each spacecraft is equipped with a transponder and an antenna array. The antenna arrays consist of four crossed dipoles that can transmit or receive either circularly or linearly polarized signals. The active spacecraft is equipped with a sophisticated transponder and makes all measurements. The transponder on the passive spacecraft is a relatively simple repeater. An initialization algorithm is developed to estimate position and attitude without any a priori information. A tracking algorithm based upon minimum variance linear estimators is also developed. Techniques to simplify the transponder on the passive spacecraft are investigated and a suitable configuration is determined. A multiple carrier CW signal format is selected. The dependences of range accuracy and ambiguity-resolution error probability are derived and used to design a candidate system. The validity of the design and the feasibility of the polarized interferometer concept are verified by simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasetyo, Retno Agung, E-mail: prasetyo.agung@bmkg.go.id; Heryandoko, Nova; Afnimar
The source mechanism of the earthquake of July 2, 2013 was investigated using moment tensor inversion, and the result was compared with field observations. Waveform data from five stations of BMKG's seismic network (KCSI, MLSI, LASI, TPTI and SNSI) were used to estimate the mechanism. Main-shock data were taken over 200 seconds and band-pass filtered with a Butterworth filter from 0.03 to 0.05 Hz. The moment tensor inversion is applied under a point-source assumption, and the Green's functions were calculated using the extended reflectivity method as modified by Kohketsu. The inversion result showed strike-slip faulting, with nodal plane strike/dip/rake of 124/80.6/152.8 and a minimum variance of 0.3285 at a centroid depth of 6 km, categorizing it as a shallow earthquake. Field observation indicated that damaged buildings were oriented to the east, which can be related to the southwest dip direction with a slip of 152 degrees. In conclusion, the pressure (P) and tension (T) axes indicate that the dominant compression comes from the south, caused by the push of the Indo-Australian plate.
Inheritance of resistance to acrinathrin in Frankliniella occidentalis (Thysanoptera: Thripidae).
Bielza, Pablo; Quinto, Vicente; Fernández, Esther; Grávalos, Carolina; Abellán, Jaime; Cifuentes, Dina
2008-05-01
The western flower thrips (WFT), Frankliniella occidentalis (Pergande), is an economically important pest. The genetic basis of acrinathrin resistance was investigated in WFT. The resistant strain, selected in the laboratory for acrinathrin resistance from a pool of thrips populations collected in Almeria (south-eastern Spain), showed a high resistance to acrinathrin (43-fold based on LC(50) values) compared with the laboratory susceptible strain. Mortality data from reciprocal crosses of resistant and susceptible thrips indicated that resistance was autosomal and not influenced by maternal effects. Analysis of probit lines from the parental strains and reciprocal crosses showed that resistance was expressed as a codominant trait. To determine the number of genes involved, a direct test of monogenic inheritance based on the backcrosses suggested that resistance to acrinathrin was probably controlled by one locus. Another approach, which was based on phenotypic variances, showed n(E), or the minimum number of freely segregating genetic factors for the resistant strain, to be 0.79. The results showed that acrinathrin resistance in WFT was autosomal and not influenced by maternal effects, and was expressed as a codominant trait, probably controlled by one locus. Copyright (c) 2008 Society of Chemical Industry.
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI and subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, a QR decomposition algorithm, which itself executes at low cost, is used, and the computational complexity is therefore reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
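A sketch of one QR-based route to the MV weights that avoids forming the covariance inverse explicitly; it illustrates replacing the O(L^3) inversion with triangular solves, and is not necessarily the authors' exact σI transformation:

```python
import numpy as np
from scipy.linalg import solve_triangular

def mv_weights_qr(X, a):
    """Minimum-variance (Capon) apodization weights via QR decomposition.

    X : (L, K) matrix of K subarray snapshots (L = subarray size)
    a : (L,) steering vector (all ones for pre-delayed ultrasound data)
    With X^H = Q R, the sample covariance R_hat = X X^H / K = R^H R / K,
    so R_hat w0 = a reduces to two triangular solves, no explicit inverse.
    """
    K = X.shape[1]
    _, R = np.linalg.qr(X.conj().T, mode="reduced")       # X^H = Q R
    y = solve_triangular(R.conj().T, K * a, lower=True)   # R^H y = K a
    w0 = solve_triangular(R, y, lower=False)              # R w0 = y
    return w0 / (a.conj() @ w0)                           # distortionless constraint

# Beamformed sample for one snapshot x: y_mv = mv_weights_qr(X, a).conj() @ x
```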
I, Satish Kumar; C, Vijaya Kumar; G, Gangaraju; Nath, Sapna; A K, Thiruvenkadan
2017-10-01
In the present study, (co)variance components and genetic parameters in Nellore sheep were obtained by the restricted maximum likelihood (REML) method using six different animal models with various combinations of direct and maternal genetic effects for birth weight (BW), weaning weight (WW), 6-month weight (6MW), 9-month weight (9MW) and 12-month weight (YW). Records of 2075 lambs descended from 69 sires and 478 dams over a period of 8 years (2007-2014) were collected from the Livestock Research Station, Palamaner, India. Lambing year, sex of lamb, season of lambing and parity of dam were the fixed effects in the model, and ewe weight was used as a covariate. The best model for each trait was determined by the log-likelihood ratio test. Direct heritability estimates for BW, WW, 6MW, 9MW and YW were 0.08, 0.03, 0.12, 0.16 and 0.10, respectively, and the corresponding maternal heritabilities were 0.07, 0.10, 0.09, 0.08 and 0.11. The proportions of maternal permanent environment variance to phenotypic variance (Pe2) were 0.07, 0.10, 0.07, 0.06 and 0.10 for BW, WW, 6MW, 9MW and YW, respectively. The estimates of direct genetic correlations among the growth traits were positive and ranged from 0.44 (BW-WW) to 0.96 (YW-9MW), and the estimates of phenotypic and environmental correlations were found to be lower than the genetic correlations. Exclusion of maternal effects from the model resulted in biased estimates of genetic parameters in Nellore sheep. Hence, to implement optimum breeding strategies for improvement of traits in Nellore sheep, maternal effects should be considered.
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840, Salisbury, Plant physiology, 4th edn., Wadsworth, Belmont, 1992) and quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves were proposed. Violations of this law in natural and experimental ecosystems were also reported. We study models of adaptation in ensembles of similar organisms under load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum then adaptation equalizes the pressure of essential factors and, therefore, acts against the Liebig's law. This is the the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system, we have to expect violations of this law.For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitations by a smaller number of factors.For analysis of adaptation, we develop a system of models based on Selye's idea of the universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidences of interdisciplinary applications to econometrics are discussed. © Society for Mathematical Biology 2010
NASA Astrophysics Data System (ADS)
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (Spanish National Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to equation (1):

log(r2_ij) = b * d_ij   (1)

where log(r2_ij) is the common variance between the target series (i) and a neighbouring series (j), d_ij is the distance between them, and b is the slope of the ordinary least-squares linear regression, fitted using only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain and converted to a regular 10 km2 grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) does not on average exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main factors driving higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than diurnal temperature.
The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than Tmax, so a denser network would be necessary to capture the higher spatial variability of minimum temperature. A conservative distance for reference series can be evaluated at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
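A sketch of the per-station CDD computation implied by equation (1), assuming anomaly series are already homogenized; the function name and the through-the-origin least-squares choice are illustrative assumptions:

```python
import numpy as np

def cdd_distance(anoms, dists, r2_threshold=0.5, radius_km=50.0, min_stations=5):
    """Correlation Decay Distance for one target station.

    anoms : (n_stations, n_months) monthly anomaly series; row 0 is the target
    dists : (n_stations,) distances from the target in km (dists[0] == 0)
    Fits log(r2_ij) = b * d_ij through the origin, then returns the distance
    at which the common variance falls to r2_threshold.
    """
    mask = (dists > 0) & (dists <= radius_km)
    if mask.sum() < min_stations:
        return np.nan
    r = np.array([np.corrcoef(anoms[0], anoms[j])[0, 1]
                  for j in np.where(mask)[0]])
    r2 = r ** 2
    b = np.sum(dists[mask] * np.log(r2)) / np.sum(dists[mask] ** 2)  # OLS slope
    return np.log(r2_threshold) / b   # b < 0, so this is a positive distance
```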
Carpenter, Iain; Perry, Michelle; Challis, David; Hope, Kevin
2003-05-01
To determine if a combination of Minimum Data Set/Resident Assessment Instrument (MDS/RAI) assessment variables and the Resource Utilisation Groups version III (RUG-III) case-mix system could be used as a method of identifying and reimbursing registered nursing care needs in long-term care. The sample included 193 nursing home residents from four nursing homes in three different locations and care providers in England. The study included assessments of residents' care needs using either the MDS/RAI assessments or RUG stand-alone questionnaires, and a time study that recorded the amount of nursing time received by residents over a 24-h period. The validity of RUG-III for explaining the distribution of care time between residents in different RUG-III groups was tested. The difference in direct and indirect care provided by registered general nurses (RGN) and care assistants (CA) to residents in RUG-III clinical groups was compared. The RUG-III system explained 56% of the variance in care time (Eta2, P=0.0001). Residents in RUG-III groups associated with particular medical and nursing needs (enhanced RGN care) received more than twice as much indirect RGN care time (t-test, P<0.001) and 1.4 times as much direct RGN and direct CA time (t-test, P<0.01) as residents with primarily cognitive impairment or physical problems only (standard RGN care). Residents with enhanced RGN care received an average of 48.1 min of RGN care in 24 h (95% CI 4.1-55.2) compared with an average of 31.1 min (95% CI 26.8-35.5) for residents in the standard RGN care group. A third, low RGN care group was created following publication of the Department of Health guidance on NHS Funded Nursing Care. With three levels, the enhanced care group receives about 38% more than the standard group, and the low group receives about 50% of the standard group's care time. The RUG-III system effectively differentiated between nursing home residents receiving 'low', 'standard' and 'enhanced' RGN care time. The findings could provide the basis of a reimbursement system for registered nursing time in long-term care facilities in the UK.
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
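A minimal sketch of the interpolate-then-analyze step for one of the interpolation choices discussed (linear); the overlapping Allan variance form is standard, while the grid spacing and averaging factor are left to the caller:

```python
import numpy as np

def oavar(x, tau0, m):
    """Overlapping Allan variance of equally spaced time-error data x
    (sampling interval tau0, averaging factor m)."""
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences
    return np.sum(d ** 2) / (2.0 * (m * tau0) ** 2 * d.size)

def oavar_irregular(t, x, tau0, m):
    """Allan variance of irregularly spaced data after linear interpolation
    onto a regular grid -- one of the interpolation choices (linear versus
    cubic spline) whose effects the paper compares."""
    tg = np.arange(t[0], t[-1], tau0)
    return oavar(np.interp(tg, t, x), tau0, m)
```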
14 CFR 135.225 - IFR: Takeoff, approach and landing minimums.
Code of Federal Regulations, 2011 CFR
2011-01-01
... that airport when used as an alternate airport, for each pilot in command of a turbine-powered airplane... the lowest straight-in landing minimums, unless otherwise restricted, if— (1) The wind direction and...
14 CFR 135.225 - IFR: Takeoff, approach and landing minimums.
Code of Federal Regulations, 2010 CFR
2010-01-01
... that airport when used as an alternate airport, for each pilot in command of a turbine-powered airplane... the lowest straight-in landing minimums, unless otherwise restricted, if— (1) The wind direction and...
Long-Period Ground Motion due to Near-Shear Earthquake Ruptures
NASA Astrophysics Data System (ADS)
Koketsu, K.; Yokota, Y.; Hikima, K.
2010-12-01
Long-period ground motion has become an increasingly important consideration because of the recent rapid increase in the number of large-scale structures, such as high-rise buildings and large oil storage tanks. Large subduction-zone earthquakes and moderate to large crustal earthquakes can generate far-source long-period ground motions in distant sedimentary basins with the help of path effects. Near-fault long-period ground motions are generated, for the most part, by the source effects of forward rupture directivity (Koketsu and Miyake, 2008). This rupture directivity effect is the maximum in the direction of fault rupture when a rupture velocity is nearly equal to shear wave velocity around a source fault (Dunham and Archuleta, 2005). The near-shear rupture was found to occur during the 2008 Mw 7.9 Wenchuan earthquake at the eastern edge of the Tibetan plateau (Koketsu et al., 2010). The variance of waveform residuals in a joint inversion of teleseismic and strong motion data was the minimum when we adopted a rupture velocity of 2.8 km/s, which is close to the shear wave velocity of 2.6 km/s around the hypocenter. We also found near-shear rupture during the 2010 Mw 6.9 Yushu earthquake (Yokota et al., 2010). The optimum rupture velocity for an inversion of teleseismic data is 3.5 km/s, which is almost equal to the shear wave velocity around the hypocenter. Since, in addition, supershear rupture was found during the 2001 Mw 7.8 Central Kunlun earthquake (Bouchon and Vallee, 2003), such fast earthquake rupture can be a characteristic of the eastern Tibetan plateau. Huge damage in Yingxiu and Beichuan from the 2008 Wenchuan earthquake and damage heavier than expected in the county seat of Yushu from the medium-sized Yushu earthquake can be attributed to the maximum rupture directivity effect in the rupture direction due to near-shear earthquake ruptures.
Optimal cue integration in ants.
Wystrach, Antoine; Mangan, Michael; Webb, Barbara
2015-10-07
In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. © 2015 The Author(s).
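For reference, the standard Gaussian maximum-likelihood cue-combination rule underlying the "optimal integration" prediction (a linear idealization; directional data strictly call for circular statistics, so this is only the schematic form):

```latex
% Weight each cue by its inverse variance (PI = path integration, V = visual)
\hat{\theta} \;=\; w\,\theta_{\mathrm{PI}} + (1-w)\,\theta_{\mathrm{V}},
\qquad
w \;=\; \frac{1/\sigma_{\mathrm{PI}}^{2}}{1/\sigma_{\mathrm{PI}}^{2} + 1/\sigma_{\mathrm{V}}^{2}}
```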
Directional selection effects on patterns of phenotypic (co)variation in wild populations
Patton, J. L.; Hubbe, A.; Marroig, G.
2016-01-01
Phenotypic (co)variation is a prerequisite for evolutionary change, and understanding how (co)variation evolves is of crucial importance to the biological sciences. Theoretical models predict that under directional selection, phenotypic (co)variation should evolve in step with the underlying adaptive landscape, increasing the degree of correlation among co-selected traits as well as the amount of genetic variance in the direction of selection. Whether either of these outcomes occurs in natural populations is an open question and thus an important gap in evolutionary theory. Here, we documented changes in the phenotypic (co)variation structure in two separate natural populations in each of two chipmunk species (Tamias alpinus and T. speciosus) undergoing directional selection. In populations where selection was strongest (those of T. alpinus), we observed changes, at least for one population, in phenotypic (co)variation that matched theoretical expectations, namely an increase of both phenotypic integration and (co)variance in the direction of selection and a re-alignment of the major axis of variation with the selection gradient. PMID:27881744
Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed
Balk, B.; Elder, K.; Baron, Jill S.
1998-01-01
Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
Robertson, David S; Prevost, A Toby; Bowden, Jack
2016-09-30
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
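For orientation, a sketch of the plain censored-data MLE (the Tobit-style baseline that the AMLE adjusts); it maximizes a likelihood with density terms for detected values and CDF terms at the detection limits. Names and the Nelder-Mead choice are illustrative assumptions:

```python
import numpy as np
from scipy import optimize, stats

def censored_lognormal_mle(y, censored):
    """MLE for log-transformed concentrations with left-censoring.

    y        : log concentrations if detected, log detection limits if not
    censored : boolean array, True where the value is below the limit
    """
    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                       # keeps sigma positive
        ll = stats.norm.logpdf(y[~censored], mu, sigma).sum()
        ll += stats.norm.logcdf(y[censored], mu, sigma).sum()
        return -ll
    res = optimize.minimize(nll, x0=[np.mean(y), 0.0], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])                   # (mu_hat, sigma_hat)
```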
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors to obtain a nonlinear state-space model was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
Variability and Maintenance of Turbulence in the Very Stable Boundary Layer
NASA Astrophysics Data System (ADS)
Mahrt, Larry
2010-04-01
The relationship of turbulence quantities to mean flow quantities, such as the Richardson number, degenerates substantially for strong stability, at least in those studies that do not place restrictions on minimum turbulence or non-stationarity. This study examines the large variability of the turbulence for very stable conditions by analyzing four months of turbulence data from a site with short grass. Brief comparisons are made with three additional sites, one over short grass on flat terrain and two with tall vegetation in complex terrain. For very stable conditions, any dependence of the turbulence quantities on the mean wind speed or bulk Richardson number becomes masked by large scatter, as found in some previous studies. The large variability of the turbulence quantities is due to random variations and other physical influences not represented by the bulk Richardson number. There is no critical Richardson number above which the turbulence vanishes. For very stable conditions, the record-averaged vertical velocity variance and the drag coefficient increase with the strength of the submeso motions (wave motions, solitary waves, horizontal modes and numerous more complex signatures). The submeso motions are on time scales of minutes and not normally considered part of the mean flow. The generation of turbulence by such unpredictable motions appears to preclude universal similarity theory for predicting the surface stress for very stable conditions. Large variation of the stress direction with respect to the wind direction for the very stable regime is also examined. Needed additional work is noted.
ERIC Educational Resources Information Center
Koen, Joshua D.; Yonelinas, Andrew P.
2013-01-01
Koen and Yonelinas (2010) contrasted the recollection and encoding variability accounts of the finding that old items are associated with more variable memory strength than new items. The study indicated that (a) increasing encoding variability did not lead to increased measures of old item variance, and (b) old item variance was directly related…
ERIC Educational Resources Information Center
Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.
2006-01-01
The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…
ERIC Educational Resources Information Center
Burgess, Gregory C.; Gray, Jeremy R.; Conway, Andrew R. A.; Braver, Todd S.
2011-01-01
Fluid intelligence (gF) and working memory (WM) span predict success in demanding cognitive situations. Recent studies show that much of the variance in gF and WM span is shared, suggesting common neural mechanisms. This study provides a direct investigation of the degree to which shared variance in gF and WM span can be explained by neural…
NASA Astrophysics Data System (ADS)
Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay
2017-11-01
Purpose: To evaluate potential use of wavelets analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); and 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). Here, the Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance) derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal), respectively, were used to quantify tumor texture. Independent t-tests or Wilcoxon rank-sum tests, depending on data normality, were used for exploratory univariate analysis. Stepwise logistic regression and receiver operator characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 of the 6 wavelet-based texture measures (all except homogeneity) were higher for malignant tumors than for benign, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p<0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction) extracted from the excretory and pre-contrast phase, respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared toward reliably characterizing and stratifying renal masses.
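A sketch of the wavelet-feature step using the PyWavelets package; it computes four of the six metrics (energy, entropy, SD, variance) per direction and level, while homogeneity and contrast would need an extra co-occurrence computation, and the exact metric definitions here are assumptions rather than the study's:

```python
import numpy as np
import pywt

def haar_texture_features(img, levels=3):
    """Texture metrics from a 3-level Haar decomposition.

    H/V/D = horizontal/vertical/diagonal detail subbands; level 1 below is
    the coarsest scale (wavedec2 orders coefficients coarse-to-fine).
    """
    feats = {}
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), "haar", level=levels)
    for lev, details in enumerate(coeffs[1:], start=1):
        for name, c in zip(("H", "V", "D"), details):
            e = c ** 2
            p = e / (e.sum() + 1e-12)                 # normalized energy
            feats[(lev, name, "energy")] = float(e.sum())
            feats[(lev, name, "entropy")] = float(-(p * np.log2(p + 1e-12)).sum())
            feats[(lev, name, "sd")] = float(c.std())
            feats[(lev, name, "variance")] = float(c.var())
    return feats
```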
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. The novel discovery of AIRS' capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in general circulation models (GCMs).
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum thus depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a-posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
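A compact sketch of the described global exploration: an L2 misfit over a Brune-type spectrum minimized with SciPy's basin-hopping (random jumps plus deterministic local minimization). The parameterization and starting values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import basinhopping

def brune_spectrum(f, omega0, fc, gamma, t_star):
    """Generalized Brune-type displacement amplitude spectrum with a
    frequency-independent-Q attenuation term exp(-pi f t*)."""
    return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** gamma)

def fit_source_spectrum(f, observed):
    """Global L2 fit of the log spectrum via basin-hopping."""
    def cost(p):
        log_omega0, log_fc, gamma, t_star = p
        pred = brune_spectrum(f, np.exp(log_omega0), np.exp(log_fc),
                              gamma, t_star)
        return np.sum((np.log(observed) - np.log(pred)) ** 2)
    x0 = [np.log(observed[0]), np.log(np.median(f)), 2.0, 0.02]
    return basinhopping(cost, x0, niter=200, seed=1).x
```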
Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T
2013-12-11
The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed (2-5), random additive genetic (1-7) and permanent environmental (1-7) curves, and different numbers of residual variance classes (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilizes the best combination of fixed, additive genetic and permanent environmental regressions, and number of classes of heterogeneous residual variance, for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the parameter estimates and the prediction of genetic values.
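A sketch of how the Legendre covariables for such a test-day model can be built; the rescaling of days in milk to [-1, 1] is standard, while the "order equals highest degree" convention is an assumption to check against the software used (here WOMBAT):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(days, order):
    """Design matrix of Legendre polynomial covariables for a test-day
    random regression model; column j holds P_j evaluated at the
    rescaled days in milk."""
    t = -1.0 + 2.0 * (days - days.min()) / (days.max() - days.min())
    cols = []
    for j in range(order + 1):
        coef = np.zeros(j + 1)
        coef[j] = 1.0                       # selects the basis polynomial P_j
        cols.append(legendre.legval(t, coef))
    return np.column_stack(cols)
```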
Aseptic minimum volume vitrification technique for porcine parthenogenetically activated blastocyst.
Lin, Lin; Yu, Yutao; Zhang, Xiuqing; Yang, Huanming; Bolund, Lars; Callesen, Henrik; Vajta, Gábor
2011-01-01
Minimum volume vitrification may provide extremely high cooling and warming rates if the sample and the surrounding medium contact directly with the liquid nitrogen and the warming medium, respectively. However, this direct contact may result in microbial contamination. In this work, an earlier aseptic technique was applied to minimum volume vitrification. After equilibration, samples were loaded on a plastic film, immersed rapidly into factory-derived, filter-sterilized liquid nitrogen, and sealed into sterile, pre-cooled straws. At warming, the straw was cut, the filmstrip was immersed into a 39 °C warming medium, and the sample was stepwise rehydrated. Cryosurvival rates of porcine blastocysts produced by parthenogenetic activation did not differ from those of control blastocysts vitrified with Cryotop. This approach can be used for minimum volume vitrification methods and may be suitable to overcome the biological dangers and legal restrictions that hamper the application of open vitrification techniques.
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
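A brute-force sketch of the core MDevD computation for a 2D categorical case; the mismatch-fraction distance and the exhaustive template scan are illustrative choices, not necessarily the paper's implementation:

```python
import numpy as np

def min_data_event_distance(ti, coords, vals):
    """Minimum data-event distance (MDevD) of one conditioning data event
    against a 2D categorical training image: the smallest mismatch fraction
    over all placements of the event template inside the TI.

    coords : (N, 2) integer offsets of the event nodes
    vals   : (N,) categorical values at those nodes
    """
    offs = coords - coords.min(axis=0)
    h, w = offs.max(axis=0) + 1
    H, W = ti.shape
    best = np.inf
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            mism = np.mean(ti[i + offs[:, 0], j + offs[:, 1]] != vals)
            best = min(best, mism)
    return best

# Ranking: the CTI with the smallest mean and variance of MDevD over all
# conditioning data events is the most compatible with the data.
```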
Atmospheric turbulence effects measured along horizontal-path optical retro-reflector links.
Mahon, Rita; Moore, Christopher I; Ferraro, Mike; Rabinovich, William S; Suite, Michele R
2012-09-01
The scintillation measured over close-to-ground retro-reflector links can be substantially enhanced due to the correlations experienced by both the direct and reflected echo beams. Experiments were carried out at China Lake, California, over a variety of ranges. The emphasis in this paper is on presenting the data from the 1.1 km retro-reflecting link that was operated for four consecutive days. The dependence of the measured irradiance flux variance on the solar fluence and on the temperature gradient above the ground is presented. The data are consistent with scintillation minima near sunrise and sunset, rising rapidly during the day and saturating at irradiance flux variances of ~10. Measured irradiance probability distributions of the retro-reflected beam are compared with standard probability density functions. The ratio of the irradiance flux variances on the retro-reflected to the direct, single-pass case is investigated with two data sets, one from a monostatic system and the other using an off-axis receiver system.
Panel flutter optimization by gradient projection
NASA Technical Reports Server (NTRS)
Pierson, B. L.
1975-01-01
A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of inplane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.
Minimum-Time Consensus-Based Approach for Power System Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Sun, Yannan
2016-02-01
This paper presents minimum-time consensus-based distributed algorithms for power system applications, such as load shedding and economic dispatch. The proposed algorithms are capable of solving these problems in a minimum number of time steps rather than asymptotically, as in most existing studies. Moreover, these algorithms are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.
Optimizing conceptual aircraft designs for minimum life cycle cost
NASA Technical Reports Server (NTRS)
Johnson, Vicki S.
1989-01-01
A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the choice of optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than when optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when considering the number of engines.
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
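To make the recursive/direct distinction concrete, here is a minimal Python sketch of the two strategies on a synthetic series, using ordinary least squares as the one-step (and per-horizon) learner; the lag order, horizon, and linear model are illustrative choices, not those of the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def embed(x, lags):
    """Lagged design matrix: row t is [x[t-lags], ..., x[t-1]], target x[t]."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

def recursive_forecast(x, lags, horizon):
    """Recursive strategy: fit one one-step model, iterate it, feeding back predictions."""
    X, y = embed(x, lags)
    model = LinearRegression().fit(X, y)
    window = list(x[-lags:])
    preds = []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[-lags:])[None, :])[0]
        preds.append(yhat)
        window.append(yhat)
    return np.array(preds)

def direct_forecast(x, lags, horizon):
    """Direct strategy: fit a separate model for each forecast horizon h."""
    preds = []
    for h in range(1, horizon + 1):
        X = np.column_stack([x[i:len(x) - lags - h + 1 + i] for i in range(lags)])
        y = x[lags + h - 1:]
        model = LinearRegression().fit(X, y)
        preds.append(model.predict(x[-lags:][None, :])[0])
    return np.array(preds)

rng = np.random.default_rng(0)
x = np.sin(np.arange(300) * 0.1) + 0.1 * rng.standard_normal(300)
print(recursive_forecast(x, lags=5, horizon=10))
print(direct_forecast(x, lags=5, horizon=10))
```

The recursive strategy propagates its own prediction errors forward (a variance/bias trade-off relative to the direct strategy), which is exactly the behavior the paper's analysis quantifies.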
Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence
NASA Astrophysics Data System (ADS)
Cheminet, Adam; Blanquart, Guillaume
2011-11-01
Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique, conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
Michel, Jesse S; Clark, Malissa A
2013-10-01
This study examines the relative importance of individual differences in relation to perceptions of work-family conflict and facilitation, as well as the moderating role of boundary preference for segmentation on these relationships. Relative importance analyses, based on a diverse sample of 380 employees from the USA, revealed that individual differences were consistently predictive of self-reported work-family conflict and facilitation. Conscientiousness, neuroticism, negative affect and core self-evaluations were consistently related to both directions of work-family conflict, whereas agreeableness predicted significant variance in family-to-work conflict only. Positive affect and core self-evaluations were consistently related to both directions of work-family facilitation, whereas agreeableness and neuroticism predicted significant variance in family-to-work facilitation only. Collectively, individual differences explained 25-28% of the variance in work-family conflict (primarily predicted by neuroticism and negative affect) and 11-18% of the variance in work-family facilitation (primarily predicted by positive affect and core self-evaluations). Moderated regression analyses showed that boundary preference for segmentation strengthened many of the relationships between individual differences and work-family conflict and facilitation. Implications for addressing the nature of work and family are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.
Extreme interplanetary rotational discontinuities at 1 AU
NASA Astrophysics Data System (ADS)
Lepping, R. P.; Wu, C.-C.
2005-11-01
This study is concerned with the identification and description of a special subset of four Wind interplanetary rotational discontinuities (from an earlier study of 134 directional discontinuities by Lepping et al. (2003)) with some "extreme" characteristics, in the sense that every case has (1) an almost planar current sheet surface, (2) a very large discontinuity angle (ω), (3) at least moderately strong normal field components (>0.8 nT), and (4) the overall set has a very broad range of transition layer thicknesses, with one being as thick as 50 RE and another at the other extreme being 1.6 RE, most being much thicker than are usually studied. Each example has a well-determined surface normal (n) according to minimum variance analysis and corroborated via time delay checking of the discontinuity with observations at IMP 8 by employing the local surface planarity. From the variance analyses, most of these cases had unusually large ratios of intermediate-to-minimum eigenvalues (λI/λmin), being on average 32 for three cases (with a fourth being much larger), indicating compact current sheet transition zones, another (the fifth) extreme property. For many years there has been a controversy as to the relative distribution of rotational (RDs) to tangential discontinuities (TDs) in the solar wind at 1 AU (and elsewhere, such as between the Sun and Earth), even to the point where some authors have suggested that RDs with large ∣Bn∣s are probably not generated or, if generated, are unstable and therefore very rare. Some of this disagreement apparently has been due to the different selection criteria used, e.g., some allowed eigenvalue ratios (λI/λmin) to be almost an order of magnitude lower than 32 in estimating n, usually introducing unacceptable error in n and therefore also in ∣Bn∣. However, we suggest that RDs may not be so rare at 1 AU, but good quality cases (where ∣Bn∣ confidently exceeds the error in ∣Bn∣) appear to be uncommon, and further, cases of large ∣Bn∣ may indeed be rare. Finally, the issue of estimating the relative numbers of RDs and TDs was revisited using the full 134 events of the original Lepping et al. (2003) study (which utilized the RDs' propagation speeds for this estimation, an unconventional approach) but now by considering only normal field components, the more conventional approach. This resulted in slightly different conclusions, depending on specific assumptions used, making the unconventional approach suspect.
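For reference, the minimum variance analysis used to estimate these surface normals reduces to diagonalizing the 3x3 covariance matrix of the measured field vectors; the eigenvector of the smallest eigenvalue estimates n, and the intermediate-to-minimum eigenvalue ratio is the quality measure quoted above. A minimal sketch on synthetic data (the field series is illustrative, not Wind data):

```python
import numpy as np

def minimum_variance_normal(B):
    """Minimum variance analysis of a magnetic field time series.

    B : (N, 3) array of field vectors across the discontinuity.
    Returns eigenvalues (ascending) and eigenvectors (as columns); the
    eigenvector of the smallest eigenvalue estimates the surface normal n.
    """
    M = np.cov(B, rowvar=False)        # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)   # eigh returns ascending eigenvalues
    return evals, evecs

# Synthetic example: fluctuations mostly in the x-y plane, weak along z,
# so the recovered normal should be close to +/- z.
rng = np.random.default_rng(1)
B = rng.standard_normal((500, 3)) * np.array([2.0, 1.0, 0.1])
evals, evecs = minimum_variance_normal(B)
n = evecs[:, 0]               # minimum variance direction
ratio = evals[1] / evals[0]   # intermediate-to-minimum eigenvalue ratio (quality check)
print(n, ratio)
```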
Minimum emittance in TBA and MBA lattices
NASA Astrophysics Data System (ADS)
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) cells, and even multiple bend achromats (MBA) have been considered. This paper derives the necessary condition for achieving minimum emittance in TBA and MBA lattices theoretically, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of a TBA as a function of phase advance in some special cases, using a purely analytical method. These results may give some direction for lattice design.
DOT National Transportation Integrated Search
1997-02-01
This report contains a summary of the work performed during the development of a minimum performance standard for lavatory trash receptacle automatic fire extinguishers. The developmental work was performed under the direction of the International Ha...
42 CFR 86.31 - Eligibility; minimum requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES GRANTS FOR EDUCATION PROGRAMS IN OCCUPATIONAL SAFETY AND HEALTH Occupational Safety and Health Direct Traineeships § 86.31 Eligibility; minimum requirements. In...
42 CFR 86.31 - Eligibility; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OCCUPATIONAL SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES GRANTS FOR EDUCATION PROGRAMS IN OCCUPATIONAL SAFETY AND HEALTH Occupational Safety and Health Direct Traineeships § 86.31 Eligibility; minimum requirements. In...
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Searching for concentric low variance circles in the cosmic microwave background
NASA Astrophysics Data System (ADS)
DeAbreu, Adam; Contreras, Dagoberto; Scott, Douglas
2015-12-01
In a recent paper, Gurzadyan & Penrose claim to have found directions in the sky around which there are multiple concentric sets of annuli with anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular theory of the pre-Big Bang Universe. We are able to reproduce the analysis these authors presented for data from the WMAP satellite and we confirm the existence of these apparently special directions in the newer Planck data. However, we also find that these features are present at the same level of abundance in simulated Gaussian CMB skies, i.e., they are entirely consistent with the predictions of the standard cosmological model.
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research area, the EWH changes in the Lancang basin are larger than those in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work, an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is assessed using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
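A second-order response surface of the kind described can be fit by ordinary least squares and then minimized over the admissible parameter box. The sketch below uses hypothetical machining variables (cutting speed v, feed f, depth of cut d) and synthetic power data, so the ranges and coefficients are illustrative only; the paper's desirability-function step is replaced here by a plain grid search:

```python
import numpy as np

# Hypothetical machining data: cutting speed v (m/min), feed f (mm/rev),
# depth of cut d (mm) -> power P (kW). Ranges and coefficients are illustrative.
rng = np.random.default_rng(2)
n = 40
v = rng.uniform(100, 300, n)
f = rng.uniform(0.05, 0.30, n)
d = rng.uniform(0.5, 2.0, n)
P = 0.8 + 0.004 * v + 3.0 * f + 0.5 * d + 0.01 * v * f + rng.normal(0, 0.05, n)

def design(v, f, d):
    """Full second-order response surface: intercept, linear, interaction, quadratic."""
    return np.column_stack([np.ones_like(v), v, f, d, v*f, v*d, f*d, v**2, f**2, d**2])

beta, *_ = np.linalg.lstsq(design(v, f, d), P, rcond=None)

# Grid search for the parameter combination minimizing predicted power.
vv, ff, dd = np.meshgrid(np.linspace(100, 300, 21),
                         np.linspace(0.05, 0.30, 21),
                         np.linspace(0.5, 2.0, 21), indexing="ij")
pred = design(vv.ravel(), ff.ravel(), dd.ravel()) @ beta
i = pred.argmin()
print("min predicted power:", pred[i], "at v,f,d =",
      vv.ravel()[i], ff.ravel()[i], dd.ravel()[i])
```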
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
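For context, the global minimum variance portfolio underlying this design problem has the closed form w ∝ Σ⁻¹1, normalized so the weights sum to one. The sketch below substitutes a plain Ledoit-Wolf shrinkage covariance (available in scikit-learn) for the paper's hybrid Tyler/Ledoit-Wolf estimator, which is not implemented here, and runs on synthetic returns:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns):
    """Global minimum variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    sigma = LedoitWolf().fit(returns).covariance_  # shrinkage covariance estimate
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

rng = np.random.default_rng(3)
R = rng.standard_normal((250, 50)) * 0.01   # 250 days, 50 assets (synthetic)
w = min_variance_weights(R)
print(w.sum(), w[:5])   # weights sum to 1 by construction
```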
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.
2012-01-01
The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems, namely nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of several. Saccade information is not analysed and is discarded because the data segments are too short to provide consistent, minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of additive output noise.
Statistical indicators of collective behavior and functional clusters in gene networks of yeast
NASA Astrophysics Data System (ADS)
Živković, J.; Tadić, B.; Wick, N.; Thurner, S.
2006-03-01
We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
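The network construction step can be sketched in a few lines: compute the gene-gene correlation matrix, map correlations to distances, and extract a minimum spanning tree. The distance transform d = sqrt(2(1-c)) is a common convention and an assumption here, not necessarily the authors' choice; the expression matrix is synthetic:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical expression matrix: genes x time points.
rng = np.random.default_rng(4)
expr = rng.standard_normal((30, 40))

C = np.corrcoef(expr)            # gene-gene correlation matrix
D = np.sqrt(2.0 * (1.0 - C))     # common correlation-to-distance transform
np.fill_diagonal(D, 0.0)
mst = minimum_spanning_tree(D)   # sparse matrix holding the tree edges
print(mst.nnz, "edges in the minimum spanning tree")  # N-1 edges for N genes
```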
Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Grimm, Robert E.; Phillips, Roger J.
1991-01-01
Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z_0. The apparent depth of isostatic compensation at z_0 = 1400 km is 180 ± 20 km, based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.
Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M
2014-01-01
In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
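In its simplest single-constraint (MVDR) form, a linearly constrained minimum variance beamformer computes w = R⁻¹a / (aᴴR⁻¹a), passing the look direction with unit gain while minimizing total output power. A minimal narrowband sketch on a synthetic array follows; the array geometry, steering model, and diagonal loading level are illustrative assumptions, not the Neural Dust configuration:

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-3):
    """Minimum variance distortionless response weights:
    w = R^{-1} a / (a^H R^{-1} a), with diagonal loading for stability."""
    n = R.shape[0]
    Rl = R + loading * np.trace(R).real / n * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Synthetic example: 16-element array, look direction a0, interferer along a1.
n = 16
theta0, theta1 = 0.0, 0.4   # normalized spatial frequencies (hypothetical)
a0 = np.exp(2j * np.pi * theta0 * np.arange(n))
a1 = np.exp(2j * np.pi * theta1 * np.arange(n))
rng = np.random.default_rng(5)
snap = (a1[None, :] * rng.standard_normal((200, 1))
        + 0.1 * (rng.standard_normal((200, n)) + 1j * rng.standard_normal((200, n))))
R = snap.conj().T @ snap / 200          # sample covariance of the snapshots
w = mvdr_weights(R, a0)
print(abs(w.conj() @ a0), abs(w.conj() @ a1))  # ~1 toward look direction, small toward interferer
```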
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
Claw length recommendations for dairy cow foot trimming
Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.
2015-01-01
The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm.
Precision gravimetric survey at the conditions of urban agglomerations
NASA Astrophysics Data System (ADS)
Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander
2014-05-01
The growth and aging of large cities lead to irreversible negative changes of the underground environment. The study of these changes in urban areas is mainly based on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main targets of an urban gravimetric survey are zones of soil decompaction, which lead to reduced rock strength, and karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement procedures. The Gravimetry division of Lomonosov Moscow State University has been examining modern precision Scintrex CG-5 Autograv gravimeters since 2006; the main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. The records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of a measurement, characterizes the sensitivity of the system to external noise and varies between devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the nonlinearity of the gravimeter drift, is partially compensated by the equipment; this factor is very important in gravimetric monitoring, where mid-range anomalies are the targets. For the examined gravimeters, amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), elevator (vertical overloads), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared by the variance of the measurement results and the internal convergence of the series. The variance of the measurement results (from ±2 to ±4 μGal) and its internal convergence proved independent of transportation mode; measurements differ only in processing time and the corresponding number of readings. Importantly, the internal convergence is an individual attribute of the particular device; for the investigated gravimeters it varies from ±3 to ±8 μGal. Various stabilities of the gravimeter base were also tested: the most stable basis (minimum microseisms) in this experiment was a concrete pedestal, the least stable a point on the 28th floor of a building. There is no direct dependence of the variance of the measurement results on the external noise level; moreover, the dispersion between different gravimeters is minimal at the point with the highest microseisms. Conclusions: the quality of measurements with the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation, and the degree of drift nonlinearity. Although these parameters of the tested gravimeters generally correspond to the factory specifications, for surveys requiring an accuracy of ±2-5 μGal the best gravimeters should be selected. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in an urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 meters. The cavity parameters obtained as a result of interpretation are well aligned with a priori data.
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
Directional selection effects on patterns of phenotypic (co)variation in wild populations.
Assis, A P A; Patton, J L; Hubbe, A; Marroig, G
2016-11-30
Phenotypic (co)variation is a prerequisite for evolutionary change, and understanding how (co)variation evolves is of crucial importance to the biological sciences. Theoretical models predict that under directional selection, phenotypic (co)variation should evolve in step with the underlying adaptive landscape, increasing the degree of correlation among co-selected traits as well as the amount of genetic variance in the direction of selection. Whether either of these outcomes occurs in natural populations is an open question and thus an important gap in evolutionary theory. Here, we documented changes in the phenotypic (co)variation structure in two separate natural populations in each of two chipmunk species (Tamias alpinus and T. speciosus) undergoing directional selection. In populations where selection was strongest (those of T. alpinus), we observed changes, at least for one population, in phenotypic (co)variation that matched theoretical expectations, namely an increase of both phenotypic integration and (co)variance in the direction of selection and a re-alignment of the major axis of variation with the selection gradient. © 2016 The Author(s).
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models, and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested, as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
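To illustrate the two extreme value routes compared above, the sketch below fits a GEV to annual maxima and a generalized Pareto to peaks over a high threshold on synthetic daily flows, then computes 100-year return levels. The threshold choice and the absence of declustering are simplifying assumptions; real applications must follow the extreme value theory rules the paper stresses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic daily flows for 30 years (gamma-distributed, purely illustrative).
daily_q = stats.gamma.rvs(a=2.0, scale=50.0, size=365 * 30, random_state=rng)
years = daily_q.reshape(30, 365)

# Annual maxima method: fit a GEV to the 30 yearly maxima.
am = years.max(axis=1)
xi, loc, scale = stats.genextreme.fit(am)
q100_am = stats.genextreme.ppf(1 - 1 / 100, xi, loc=loc, scale=scale)

# Peak-over-threshold method: fit a GPD to exceedances over a high threshold.
u = np.quantile(daily_q, 0.99)
exc = daily_q[daily_q > u] - u
c, _, scale_g = stats.genpareto.fit(exc, floc=0.0)
rate = len(exc) / 30.0   # mean number of exceedances per year
q100_pot = u + stats.genpareto.ppf(1 - 1 / (100 * rate), c, loc=0.0, scale=scale_g)
print(q100_am, q100_pot)
```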
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
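The variance advantage of IS over direct Monte Carlo is easy to demonstrate on a scalar rare-event problem. The sketch below estimates a Gaussian tail probability with a mean-shifted proposal, which is the textbook IS parameter choice, not the bounds-based selection developed in the paper:

```python
import numpy as np

# Estimate p = P(X > t) for X ~ N(0,1) with t = 4 (a rare event), comparing
# direct Monte Carlo with importance sampling from a mean-shifted Gaussian.
rng = np.random.default_rng(7)
t, n = 4.0, 100_000

# Direct Monte Carlo: very high relative variance for rare events.
x = rng.standard_normal(n)
p_mc = np.mean(x > t)

# Importance sampling: draw from N(t, 1) and reweight by the likelihood ratio
# w(y) = phi(y) / phi(y - t) = exp(-t*y + t^2/2).
y = rng.standard_normal(n) + t
w = np.exp(-t * y + t**2 / 2)
est = w * (y > t)
p_is = est.mean()
var_is = est.var(ddof=1) / n   # sample estimate of the IS estimator variance
print(p_mc, p_is, var_is)
```

The true tail probability here is about 3.2e-5; the plain Monte Carlo estimate typically sees only a handful of exceedances at this sample size, while the IS estimate concentrates samples in the rare region and reweights them.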
Code of Federal Regulations, 2012 CFR
2012-04-01
... habitable room (not including partitioned areas) shall have at least one window or skylight opening directly to the out-of-doors. The minimum total window or skylight area, including windows in doors, shall... percent of the minimum window or skylight area required, except where comparably adequate ventilation is...
Code of Federal Regulations, 2013 CFR
2013-04-01
... habitable room (not including partitioned areas) shall have at least one window or skylight opening directly to the out-of-doors. The minimum total window or skylight area, including windows in doors, shall... percent of the minimum window or skylight area required, except where comparably adequate ventilation is...
Knotter, Maartje H; Wissink, Inge B; Moonen, Xavier M H; Stams, Geert-Jan J M; Jansen, Gerard J
2013-05-01
Data were collected from 121 staff members (20 direct support staff teams) on background characteristics of the individual staff members and their teams (gender, age, years of work experience, position and education), the frequency and form of aggression of clients with an intellectual disability (verbal or physical), staff members' attitudes towards aggression, and the types of behavioural interventions they executed (providing personal space and behavioural boundary-setting, restricting freedom and the use of coercive measures). Additionally, client group characteristics (age of clients, type of care and client's level of intellectual disability) were assessed. Multilevel analyses (individual and contextual level) were performed to examine the relations between all studied variables and the behavioural interventions. The results showed that for providing personal space and behavioural boundary-setting as well as for restricting freedom, the proportion of variance explained by the context (staff team and client group characteristics) was three times larger than the proportion of variance explained by individual staff member characteristics. For using coercive measures, the context even accounted for 66% of the variance, whereas only 8% was explained by individual staff member characteristics. A negative attitude towards aggression of the direct support team as a whole proved to be an especially strong predictor of using coercive measures. To diminish the use of coercive measures, interventions should therefore be directed towards influencing the attitude of direct support teams instead of individual staff members.
Optimization of fixed-range trajectories for supersonic transport aircraft
NASA Astrophysics Data System (ADS)
Windhorst, Robert Dennis
1999-11-01
This thesis develops near-optimal guidance laws that generate minimum-fuel, minimum-time, or minimum direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion, by time scale, into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point boundary-value problems obtained by applying the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum-fuel trajectory consists of three segments: a minimum-fuel energy-climb, a cruise-climb, and a minimum-drag glide. The minimum-time trajectory also has three segments: a maximum dynamic pressure ascent, a constant-altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum-fuel trajectory. Moreover, the HSCT has three locally optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, mixed subsonic and supersonic flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.
The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2005-01-01
The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative planetary mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk, followed by a rapid decay which results in a sharper outer boundary than predicted by the minimum-mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.
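The cumulative-mass route to the surface density rests on the identity Σ(r) = (1/2πr) dM/dr for an axisymmetric disk, where M(r) is the mass interior to r. A minimal numerical sketch with a hypothetical smooth cumulative-mass fit follows; the functional form and parameters are illustrative, not the paper's best-fit transcendental function:

```python
import numpy as np

# Surface density from a cumulative mass profile: Sigma(r) = (1/(2*pi*r)) dM/dr.
# Hypothetical smooth fit M(r) = M0 * (1 - exp(-(r/rc)**p)) to cumulative planetary mass.
M0, rc, p = 1.0, 10.0, 1.5          # illustrative parameters, not fitted values
r = np.linspace(0.3, 40.0, 400)     # heliocentric distance grid (AU)
M = M0 * (1.0 - np.exp(-(r / rc) ** p))
sigma = np.gradient(M, r) / (2.0 * np.pi * r)   # numerical derivative dM/dr
print(sigma[:5])
```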
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions, and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments, or by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
Sasikala, Wilbee D; Mukherjee, Arnab
2012-10-11
DNA intercalation, a biophysical process of enormous clinical significance, has surprisingly eluded molecular understanding for several decades. With appropriate configurational restraint (to prevent dissociation) in all-atom metadynamics simulations, we capture the free energy surface of direct intercalation from minor groove-bound state for the first time using an anticancer agent proflavine. Mechanism along the minimum free energy path reveals that intercalation happens through a minimum base stacking penalty pathway where nonstacking parameters (Twist→Slide/Shift) change first, followed by base stacking parameters (Buckle/Roll→Rise). This mechanism defies the natural fluctuation hypothesis and provides molecular evidence for the drug-induced cavity formation hypothesis. The thermodynamic origin of the barrier is found to be a combination of entropy and desolvation energy.
25 CFR 36.20 - Standard V-Minimum academic programs/school calendar.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., physical education, music, etc.) which are directly related to or affect student instruction shall provide... Section 36.20 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION MINIMUM ACADEMIC STANDARDS FOR THE BASIC EDUCATION OF INDIAN CHILDREN AND NATIONAL CRITERIA FOR DORMITORY...
Observed spatiotemporal variability of boundary-layer turbulence over flat, heterogeneous terrain
NASA Astrophysics Data System (ADS)
Maurer, V.; Kalthoff, N.; Wieser, A.; Kohler, M.; Mauder, M.; Gantner, L.
2016-02-01
In the spring of 2013, extensive measurements with multiple Doppler lidar systems were performed. The instruments were arranged in a triangle with edge lengths of about 3 km in moderately flat, agriculturally used terrain in northwestern Germany. For 6 mostly cloud-free convective days, vertical velocity variance profiles were calculated. Weighted-average surface fluxes proved to be more appropriate than data from individual sites for scaling the variance profiles; but even then, the scatter of the profiles was mostly larger than the statistical error. The scatter could not be explained by mean wind speed or stability, whereas time periods with significantly increased variance contained broader thermals. Periods with an elevated maximum of the variance profiles could also be related to broad thermals. Moreover, statistically significant spatial differences of variance were found. They were not influenced by the existing surface heterogeneity. Instead, thermals were preserved between two sites when the travel time was shorter than the large-eddy turnover time. At the same time, no thermals passed for more than 2 h at a third site that was located perpendicular to the mean wind direction relative to the first two sites. Organized structures of turbulence, with subsidence prevailing in the surroundings of thermals, can thus partly explain significant spatial variance differences existing for several hours. Therefore, the representativeness of individual variance profiles derived from measurements at a single site cannot be assumed.
Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities
NASA Astrophysics Data System (ADS)
Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco
2017-03-01
Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understand the nature of the jamming transition in simple model systems and is currently considered very promising to characterize more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained by subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues.
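In its simplest form, the DVA signal is just the spatial variance of frame differences as a function of time-lag. A minimal sketch on a synthetic image stack follows; the normalization and susceptibility analysis used in the paper are omitted, and the stack here is pure noise, so the curve stays flat rather than growing with lag as it would for a relaxing sample:

```python
import numpy as np

def differential_variance(frames, lag):
    """Mean over frame pairs of the spatial variance of I(t+lag) - I(t).

    frames : (T, H, W) image stack. For a relaxing sample this grows with
    lag and plateaus once the scene has fully decorrelated.
    """
    diffs = frames[lag:] - frames[:-lag]
    return np.mean([d.var() for d in diffs])

# Synthetic stack: static background plus uncorrelated noise, so the
# differential variance is independent of lag (no real dynamics).
rng = np.random.default_rng(8)
frames = 1.0 + 0.1 * rng.standard_normal((100, 64, 64))
curve = [differential_variance(frames, k) for k in range(1, 20)]
print(np.round(curve, 4))
```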
Social comparison processes and catastrophising in fibromyalgia: A path analysis.
Cabrera-Perona, V; Buunk, A P; Terol-Cantero, M C; Quiles-Marcos, Y; Martín-Aragón, M
2017-06-01
In addition to coping strategies, social comparison may play a role in illness adjustment. However, little is known about the role of contrast and identification in social comparison in adaptation to fibromyalgia. The aim was to evaluate, through a path analysis in a sample of fibromyalgia patients, the association between identification and contrast in social comparison, catastrophising, and specific health outcomes (fibromyalgia illness impact and psychological distress). 131 Spanish fibromyalgia outpatients (mean age: 50.15, SD = 11.1) filled out a questionnaire. We present a model that explained 33% of the variance in catastrophising by direct effects of more use of upward contrast and downward identification. In addition, 35% of the variance in fibromyalgia illness impact was explained by less upward identification, more upward contrast, and more catastrophising, and 42% of the variance in psychological distress by a direct effect of more use of upward contrast together with higher fibromyalgia illness impact. We suggest that intervention programmes with chronic pain and fibromyalgia patients should focus on enhancing the use of upward identification in social comparison, and on minimising the use of upward contrast and downward identification.
A Comparison of Detection and Tracking Methods as Applied to OPIR Optics
2014-12-01
Settling of hot particles through turbulence
NASA Astrophysics Data System (ADS)
Coletti, Filippo; Frankel, Ari; Pouransari, Hadi; Mani, Ali
2014-11-01
Particle-laden flows in which the dispersed phase is not isothermal with the continuous phase are common in a wealth of natural and industrial settings. In this study we consider the case of inertial particles heated by thermal radiation while settling through a turbulent transparent gas. Particles much smaller than the minimum flow scales are considered. The particle Stokes number (based on the Kolmogorov time scale) and the nominal settling velocity (normalized by the root-mean-square fluid velocity fluctuation) are both of order unity. In the considered dilute and optically thin regime, each particle receives the same heat flux. Numerical simulations are performed in which the two-way coupling between the dispersed and continuous phases is taken into account. The momentum and energy equations are solved in a triply periodic domain, resolving all spatial and temporal scales. While falling, the heated particles shed plumes of buoyant gas, modifying the turbulence structure and enhancing velocity fluctuations in the vertical direction. The radiative forcing does not affect preferential concentration (clustering of particles in low vorticity regions), but reduces preferential sweeping (particles sampling regions of downward fluid motion). Overall, the mean settling velocity varies slightly when heating the particles, while its variance is greatly increased. We gratefully acknowledge support from the DOE PSAAP II program.
A Random Forest Approach to Predict the Spatial Distribution ...
Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC) (transport and fate proxy) was a strong predictor of TCS contamination causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated wi
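A minimal scikit-learn version of this workflow, with hypothetical stand-ins for the explanatory variables (TOC, sand fraction, CSO discharge) and a synthetic response, illustrates the RF fit and the permutation-based importance measure quoted above; the variable names and data-generating relationships are assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical predictors standing in for the paper's proxies: total organic
# carbon (toc), sand fraction (sand), and CSO discharge (cso).
rng = np.random.default_rng(9)
n = 500
toc = rng.uniform(0, 5, n)
sand = rng.uniform(0, 100, n)
cso = rng.exponential(1.0, n)
tcs = 2.0 * toc + 0.5 * cso - 0.01 * sand + rng.normal(0, 0.5, n)  # synthetic response

X = np.column_stack([toc, sand, cso])
rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X, tcs)
print("OOB R^2:", rf.oob_score_)

# Permutation importance mirrors the paper's MSE-increase-under-permutation measure.
imp = permutation_importance(rf, X, tcs, n_repeats=10, random_state=0)
print(dict(zip(["toc", "sand", "cso"], np.round(imp.importances_mean, 3))))
```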
Hu, Hong; Xu, Shanshan; Yuan, Yuan; Liu, Runna; Wang, Supin; Wan, Mingxi
2015-05-01
Cavitation is considered as the primary mechanism of soft tissue fragmentation (histotripsy) by pulsed high-intensity focused ultrasound. The residual cavitation bubbles have a dual influence on the histotripsy pulses: these serve as nuclei for easy generation of new cavitation, and act as strong scatterers causing energy "shadowing." To monitor the residual cavitation bubbles in histotripsy, an ultrafast active cavitation imaging method with relatively high signal-to-noise ratio and good spatial-temporal resolution was proposed in this paper, which combined plane wave transmission, minimum variance beamforming, and coherence factor weighting. The spatial-temporal evolutions of residual cavitation bubbles around a fluid-tissue interface in histotripsy under pulse duration (PD) of 10-40 μs and pulse repetition frequency (PRF) of 0.67-2 kHz were monitored by this method. The integrated bubble area curves inside the tissue interface were acquired from the bubble image sequence, and the formation process of histotripsy damage was estimated. It was observed that the histotripsy efficiency decreased with both longer PDs and higher PRFs. A direct relationship with a coefficient of 1.0365 between histotripsy lesion area and inner residual bubble area was found. These results can assist in monitoring and optimization of the histotripsy treatment further.
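Of the three ingredients combined in this imaging method, coherence factor weighting is the simplest to sketch: for each pixel, CF = |Σᵢ sᵢ|² / (N Σᵢ |sᵢ|²) over the N receive channels, and the beamformed output is multiplied by CF to suppress incoherent (off-axis or clutter) energy. A toy comparison of coherent versus incoherent channel data follows; the array size and signals are illustrative, not the experimental configuration:

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor per pixel: CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2).

    channel_data : (N_channels, N_pixels) array of per-channel samples.
    CF is 1 for perfectly coherent channels and ~1/N for incoherent noise.
    """
    num = np.abs(channel_data.sum(axis=0)) ** 2
    den = channel_data.shape[0] * (np.abs(channel_data) ** 2).sum(axis=0)
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(10)
coherent = np.ones((64, 1)) * rng.standard_normal((1, 100))   # identical across channels
incoherent = rng.standard_normal((64, 100))                   # uncorrelated across channels
print(coherence_factor(coherent).mean(), coherence_factor(incoherent).mean())
```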
Study on Crystallographic Orientation Effect on Surface Generation of Aluminum in Nano-cutting
NASA Astrophysics Data System (ADS)
Xu, Feifei; Fang, Fengzhou; Zhu, Yuanqing; Zhang, Xiaodong
2017-04-01
Material characteristics such as the size effect are among the most important factors that cannot be neglected when cutting material at the nanoscale. The effects of the anisotropic nature of single crystal materials in nano-cutting are investigated employing molecular dynamics simulation. Results show that the size effect of the plastic deformation is based on different plastic carriers, such as twins, stacking faults, and dislocations. The minimum uncut chip thickness depends on the cutting direction, and even a negative value is obtained when the cutting direction is {110}<001>. The cutting direction also determines the material deformation and removal mechanism (e.g., shearing, extruding, and rubbing) as the uncut chip thickness decreases. When the material is deformed by shearing, the primary shear zone expands from the stagnation point or the tip of the stagnation zone. When the material is deformed by extruding and rubbing, the primary deformation zone is almost parallel to the cutting direction and expands from the bottom of the cutting edge, merging with the tertiary deformation zone. The generated surface quality relates to the crystallographic orientation and the minimum uncut chip thickness. The cutting directions {110}<001>, {110}<1-10>, and {111}<1-10>, whose minimum uncut chip thickness is relatively small, give better surface quality than the other cutting directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
47 CFR 74.536 - Directional antenna required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Directional antenna required. 74.536 Section 74... Auxiliary Stations § 74.536 Directional antenna required. (a) Aural broadcast STL and ICR stations are required to use a directional antenna with the minimum beamwidth necessary, consistent with good...
47 CFR 74.536 - Directional antenna required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Directional antenna required. 74.536 Section 74... Auxiliary Stations § 74.536 Directional antenna required. (a) Aural broadcast STL and ICR stations are required to use a directional antenna with the minimum beamwidth necessary, consistent with good...
47 CFR 74.536 - Directional antenna required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Directional antenna required. 74.536 Section 74... Auxiliary Stations § 74.536 Directional antenna required. (a) Aural broadcast STL and ICR stations are required to use a directional antenna with the minimum beamwidth necessary, consistent with good...
47 CFR 74.536 - Directional antenna required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false Directional antenna required. 74.536 Section 74... Auxiliary Stations § 74.536 Directional antenna required. (a) Aural broadcast STL and ICR stations are required to use a directional antenna with the minimum beamwidth necessary, consistent with good...
47 CFR 74.536 - Directional antenna required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false Directional antenna required. 74.536 Section 74... Auxiliary Stations § 74.536 Directional antenna required. (a) Aural broadcast STL and ICR stations are required to use a directional antenna with the minimum beamwidth necessary, consistent with good...
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to the design of CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. The proposed task-driven optimization thus provides additional opportunities for improved imaging performance and dose reduction beyond those achievable with conventional acquisition and reconstruction.
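The Gaussian-basis parameterization of the tube current profile can be sketched as follows. This is a hedged illustration, not the paper's code: the detectability objective below is a toy placeholder for the paper's model-based d' prediction, the dose constraint is simplified to a fixed mean current, and a Nelder-Mead optimizer stands in for the CMA-ES algorithm the authors used.

import numpy as np
from scipy.optimize import minimize

def tcm_profile(coeffs, angles, centers, width):
    # Tube current as a linear combination of Gaussian basis functions
    # over projection angle (the parameterization described in the abstract).
    basis = np.exp(-0.5 * ((angles[:, None] - centers[None, :]) / width) ** 2)
    return basis @ coeffs

def neg_objective(coeffs, angles, centers, width):
    # Hypothetical stand-in for -d'(modulation); the paper predicts d' from
    # local noise/resolution models of the PL reconstruction instead.
    tcm = np.clip(tcm_profile(coeffs, angles, centers, width), 1e-3, None)
    dose_penalty = (tcm.mean() - 1.0) ** 2      # hold mean tube current fixed
    return -np.mean(np.log(tcm)) + 1e3 * dose_penalty

angles = np.linspace(0, 2 * np.pi, 360)
centers = np.linspace(0, 2 * np.pi, 8)          # 8 Gaussian basis functions
res = minimize(neg_objective, np.ones(8), args=(angles, centers, 0.8),
               method="Nelder-Mead")            # CMA-ES in the actual study
print(res.x)                                     # optimized basis coefficients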
Noakes, Matthew J; Wolf, Blair O; McKechnie, Andrew E
Avian metabolic responses demonstrate considerable diversity under fluctuating environmental conditions, a well-studied example being the seasonal upregulation of basal metabolic rate (BMR) and summit metabolism (Msum) in temperate species experiencing harsh winters. Fewer studies have examined seasonal metabolic acclimatization in subtropical or tropical species. We investigated seasonal metabolic variation in an Afrotropical ploceid passerine, the white-browed sparrow-weaver (Plocepasser mahali; ∼47 g), at three sites along a climatic gradient of approximately 7°C in winter minimum air temperature (Ta). We measured Msum (n ≥ 10 per site per season) in a helox atmosphere, BMR of the same birds at thermoneutrality (Ta ≈ 30°C), and resting metabolic rates at 5°C ≤ Ta ≤ 20°C. Patterns of seasonal adjustments in BMR varied among populations in a manner not solely related to variation in seasonal Ta extremes, ranging from BMR ∼52% higher in winter than in summer to no seasonal difference. Greater cold tolerance was found in a population at a colder desert site, manifested as higher Msum (∼25% higher) and lower helox temperature at cold limit values compared with a milder, mesic site. Our results lend support to the idea that greater variance in the pattern of seasonal metabolic responses occurs in subtropical and tropical species compared with their temperate-zone counterparts and that factors other than Ta extremes (e.g., food availability) may be important in determining the magnitude and direction of seasonal metabolic adjustments in subtropical birds.
NASA Astrophysics Data System (ADS)
Lee, Juhyun; Im, Jungho; Park, Seohui; Yoo, Cheolhee
2017-04-01
Tropical cyclones are among the most destructive natural disasters, causing enormous damage to people and society. Analyzing the behavior and characteristics of tropical cyclones is essential for mitigating their damage; in particular, it is important to keep track of the centers of tropical cyclones. The cyclone center and track information (called Best Track) provided by the Joint Typhoon Warning Center (JTWC) is widely used as the reference data for tropical cyclone centers. However, JTWC uses multiple resources, including numerical modeling, geostationary satellite data, and in situ measurements, to determine the best track in a subjective way, and makes it available to the public only six months after an event occurred. Thus, the best track data cannot be used operationally to identify the centers of tropical cyclones in real time. In this study, we proposed an automated approach for identifying the centers of tropical cyclones using only Communication, Ocean, and Meteorological Satellite (COMS) Meteorological Imager (MI) derived data. The sensor provides five bands: VIS (0.67 µm), SWIR (3.7 µm), WV (6.7 µm), IR1 (10.8 µm), and IR2 (12.0 µm). We used IR1 band images to extract brightness temperatures of cloud tops over the western North Pacific between 2011 and 2012. The angle deviation between the brightness temperature-based gradient direction in a moving window and the reference angle toward the center of the window was extracted. Then, a spatial analysis index called circular variance was adopted to identify the centers of tropical cyclones based on the angle deviation. Finally, the locations of the minimum circular variance indexes were identified as the centers of tropical cyclones. While the proposed method has performance comparable with the best track data for detecting cyclone centers in cases of organized cloud convection, it identified cyclone centers distant (roughly 2 degrees) from the best track centers for unorganized convection.
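A minimal sketch of the circular-variance center finder described above follows. It assumes a 2-D brightness temperature array and a square moving window; all function and variable names are hypothetical, and the loop-based implementation favors clarity over speed.

import numpy as np

def circular_variance_map(bt, half=10):
    # For each interior pixel, compare gradient directions in a
    # (2*half+1)^2 window with the direction pointing toward the window
    # center, then take the circular variance of the angular deviations.
    gy, gx = np.gradient(bt.astype(float))
    grad_dir = np.arctan2(gy, gx)
    cv = np.full(bt.shape, np.nan)
    jj, ii = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    ref = np.arctan2(-ii, -jj)          # angle from each window pixel toward center
    for r in range(half, bt.shape[0] - half):
        for c in range(half, bt.shape[1] - half):
            dev = grad_dir[r - half:r + half + 1, c - half:c + half + 1] - ref
            # Circular variance = 1 - mean resultant length of the deviations
            cv[r, c] = 1.0 - np.abs(np.exp(1j * dev).mean())
    return cv

# Candidate cyclone centers are minima of the map:
# center = np.unravel_index(np.nanargmin(circular_variance_map(bt)), bt.shape)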
An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements
NASA Astrophysics Data System (ADS)
Kang, D.
2015-12-01
In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken at multiple levels, so it is desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted sum of the error variances at each sample altitude; the weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat, using a radiosonde on a tethered balloon with which temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over traditional methods will be illustrated, and some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
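The weighted-cost-function idea can be illustrated with a deliberately simplified sketch: a neutral-stability logarithmic wind profile fitted to samples from several levels, with each residual weighted by its sample variance. The stability correction terms of full MOST are omitted, and all names and values are illustrative.

import numpy as np
from scipy.optimize import least_squares

KAPPA = 0.4  # von Karman constant

def fit_log_wind_profile(z, u_obs, u_var):
    # Fit u(z) = (u*/kappa) * ln(z/z0) to wind samples from multiple levels,
    # weighting each residual by its sample variance (the cost-function idea).
    def residuals(p):
        ustar, ln_z0 = p
        model = (ustar / KAPPA) * (np.log(z) - ln_z0)
        return (u_obs - model) / np.sqrt(u_var)   # variance-weighted residuals
    sol = least_squares(residuals, x0=[0.3, np.log(1e-4)])
    ustar, ln_z0 = sol.x
    return ustar, np.exp(ln_z0)                    # friction velocity, roughness

rng = np.random.default_rng(0)
z = np.array([2., 5., 10., 20., 40.])              # sample heights, m
u = (0.35 / KAPPA) * np.log(z / 1e-4) + rng.normal(0, 0.2, z.size)
print(fit_log_wind_profile(z, u, np.full(z.size, 0.04)))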
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the angular positions of the measurements, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions, and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and that only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform sampling) are required to get the best parametric images. Additional angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Finally, using the CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging, and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
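A generic numerical CRLB computation of the kind used above can be sketched as follows. The Gaussian-shaped angular intensity profile is a hypothetical stand-in for the analyzer rocking curve, and independent Gaussian noise is assumed; the bound is the inverse of a Fisher information matrix built from the model's Jacobian.

import numpy as np

def crlb(theta_meas, params, sigma2, eps=1e-6):
    # Cramer-Rao lower bound for parameters of a modeled angular intensity
    # profile sampled at analyzer angles theta_meas, under independent
    # Gaussian noise of variance sigma2.
    def aip(p, th):
        amp, shift, width = p            # absorption, refraction, scatter proxies
        return amp * np.exp(-0.5 * ((th - shift) / width) ** 2)
    # Numerical Jacobian of the model with respect to the parameters
    J = np.empty((theta_meas.size, len(params)))
    for k in range(len(params)):
        dp = np.array(params, float); dp[k] += eps
        J[:, k] = (aip(dp, theta_meas) - aip(params, theta_meas)) / eps
    fisher = J.T @ J / sigma2            # Fisher information matrix
    return np.linalg.inv(fisher)         # diagonal bounds each parameter variance

angles = np.linspace(-5, 5, 11)          # eleven uniform angular positions
print(np.diag(crlb(angles, [1.0, 0.1, 1.5], sigma2=1e-4)))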
Kim, Hee-Jong; Shin, Jeong-Hyeon; Han, Cheol E; Kim, Hee Jin; Na, Duk L; Seo, Sang Won; Seong, Joon-Kyung
2016-01-01
Cortical thinning patterns in Alzheimer's disease (AD) have been widely reported through conventional regional analysis. In addition, the coordinated variance of cortical thickness in different brain regions has been investigated at both the individual and group network levels. In this study, we aim to investigate the network architectural characteristics of the structural covariance network (SCN) in AD, and further to show that structural covariance connectivity becomes disorganized across brain regions in AD, while normal control (NC) subjects maintain more clustered and consistent coordination in cortical atrophy variations. We generated SCNs directly from T1-weighted MR images of individual patients using surface-based cortical thickness data, with structural connectivity defined as similarity in cortical thickness between different brain regions. Individual SCNs were constructed using morphometric data from the Samsung Medical Center (SMC) dataset. The structural covariance connectivity showed higher clustering than randomly generated networks, as well as similar minimum path lengths, indicating that the SCNs are "small world." There were significant differences between the NC and AD groups in characteristic path length (z = -2.97, p < 0.01) and small-worldness (z = 4.05, p < 0.01). The clustering coefficient in AD was smaller than that in NC, but the difference was not significant (z = 1.81). We further observed that the AD patients had significantly disrupted structural connectivity, and we show that the coordinated variance of cortical thickness is distributed more randomly from one region to other regions in AD patients when compared to NC subjects. Our proposed SCN may provide surface-based measures for understanding the interaction between two brain regions with co-atrophy of the cerebral cortex due to normal aging or AD. We applied our method to the AD Neuroimaging Initiative (ADNI) data to show consistency of results with the SMC dataset.
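The graph metrics reported above (clustering, characteristic path length, small-worldness) can be computed with standard tools. The sketch below assumes an already-constructed SCN graph; building individual SCNs from cortical thickness similarity is not shown, and a Watts-Strogatz graph stands in for real data.

import networkx as nx
import numpy as np

def small_worldness(G, n_rand=10, seed=0):
    # Small-worldness sigma = (C/C_rand) / (L/L_rand), comparing the graph
    # against degree-preserving randomizations via double edge swaps.
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    Cr, Lr = [], []
    for i in range(n_rand):
        R = G.copy()
        nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                            max_tries=40 * R.number_of_edges(), seed=seed + i)
        if nx.is_connected(R):               # path length needs connectivity
            Cr.append(nx.average_clustering(R))
            Lr.append(nx.average_shortest_path_length(R))
    return (C / np.mean(Cr)) / (L / np.mean(Lr))

# Toy stand-in for a structural covariance network over 90 cortical regions
G = nx.connected_watts_strogatz_graph(90, k=6, p=0.1, seed=1)
print(small_worldness(G))    # > 1 indicates a "small world" network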
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Procedures. 3.12 Section 3.12 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Establishment of Minimum Capital Ratios for an Individual Bank § 3.12 Procedures. (a) Notice. When the OCC...
76 FR 34294 - Proposed Collection; Comment Request for Form 8827
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... 8827, Credit for Prior Year Minimum Tax-Corporations. DATES: Written comments should be received on or before August 12, 2011 to be assured of consideration. ADDRESSES: Direct all written comments to Yvette B....gov . SUPPLEMENTARY INFORMATION: Title: Credit for Prior Year Minimum Tax-Corporations. OMB Number...
Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies
Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne
2014-01-01
Background: In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings: We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions: These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
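The comparison described above rests on the multivariate breeders' equation, R = G beta, and on evolvability along the selection gradient versus the average over random directions. A small numerical sketch with a hypothetical G matrix and selection gradient (values are illustrative, not from the study):

import numpy as np

# Hypothetical genetic (co)variance matrix G for four morphological traits
# and a directional selection gradient beta.
G = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.3],
              [0.4, 0.4, 0.3, 1.0]])
beta = np.array([0.3, -0.1, 0.2, 0.1])

R_full = G @ beta                        # multivariate breeders' equation
R_nocov = np.diag(np.diag(G)) @ beta     # genetic correlations set to zero

# Evolvability along beta versus the expectation over random directions
# (the latter equals trace(G)/n for unit-length random directions).
e_beta = beta @ G @ beta / (beta @ beta)
e_rand = np.trace(G) / G.shape[0]

print(np.linalg.norm(R_full) / np.linalg.norm(R_nocov))  # correlation effect
print(e_beta, e_rand)                     # e_beta < e_rand signals nonalignment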
Microarchitecture and Bone Quality in the Human Calcaneus: Local Variations of Fabric Anisotropy
Souzanchi, M F; Palacio-Mancheno, P E; Borisov, Y; Cardoso, L; Cowin, SC
2012-01-01
The local variability of microarchitecture of human trabecular calcaneus bone is investigated using high resolution microCT scanning. The fabric tensor is employed as the measure of the microarchitecture of the pore structure of a porous medium. It is hypothesized that a fabric tensor-dependent poroelastic ultrasound approach will more effectively predict the data variance than will porosity alone. The specific aims of the present study are i) to quantify the morphology and local anisotropy of the calcaneus microarchitecture with respect to anatomical directions, ii) to determine the interdependence, or lack thereof, of microarchitecture parameters, fabric, and volumetric bone mineral density (vBMD), and iii) to determine the relative ability of vBMD and fabric measurements in evaluating the variance in ultrasound wave velocity measurements along orthogonal directions in the human calcaneus. Our results show that the microarchitecture in the analyzed regions of human calcanei is anisotropic, with a preferred alignment along the posterior-anterior direction. Strong correlation was found between most scalar architectural parameters and vBMD. However, no statistical correlation was found between vBMD and the fabric components, the measures of the pore microstructure orientation. Therefore, among the parameters usually considered for cancellous bone (i.e., classic histomorphometric parameters such as porosity, trabecular thickness, number and separation), only fabric components explain the data variance that cannot be explained by vBMD, a global mass measurement, which lacks the sensitivity and selectivity to distinguish osteoporotic from healthy subjects because it is insensitive to directional changes in bone architecture. This study demonstrates that a multi-directional, fabric-dependent poroelastic ultrasound approach has the capability of characterizing anisotropic bone properties (bone quality) beyond bone mass, and could help to better understand anisotropic changes in bone architecture using ultrasound. PMID:22807141
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The "inverse piano" apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading back to the initial conditions, motor equivalent deviations were dominant. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions intended to correct those salient variables. Consistency of the motor equivalence and variance analyses provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
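The motor-equivalent/non-motor-equivalent decomposition is a null-space projection, which the following sketch makes concrete for the total-force task. The Jacobian for the total force of four fingers is a row of ones; all names are illustrative.

import numpy as np

def me_decompose(dev, J):
    # Split a deviation vector of finger forces/modes into a motor-equivalent
    # part (null space of J, leaves the performance variable unchanged) and a
    # non-motor-equivalent part (row space of J, changes the variable).
    Jp = np.linalg.pinv(J)
    non_me = Jp @ J @ dev      # projection onto the row space of J
    me = dev - non_me          # projection onto the null space of J
    return me, non_me

J = np.ones((1, 4))            # total force of four fingers
dev = np.array([0.5, -0.2, 0.1, -0.1])
me, nme = me_decompose(dev, J)
print(me, nme, J @ me)         # J @ me is ~0: total force is unaffected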
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Gong, Huili; Dai, Zhenxue
Alluvial fans are highly heterogeneous in hydraulic properties due to complex depositional processes, which make it difficult to characterize the spatial distribution of the hydraulic conductivity (K). An original methodology is developed to identify the spatial statistical parameters (mean, variance, correlation range) of the hydraulic conductivity in a three-dimensional (3-D) setting by using geological and geophysical data. More specifically, a large number of inexpensive vertical electric soundings are integrated with a facies model developed from borehole lithologic data to simulate the log10(K) continuous distributions in multiple-zone heterogeneous alluvial megafans. The Chaobai River alluvial fan in the Beijing Plain, China, is used as an example to test the proposed approach. Due to the non-stationary property of the K distribution in the alluvial fan, a multiple-zone parameterization approach is applied to analyze the conductivity statistical properties of different hydrofacies in the various zones. The composite variance in each zone is computed to describe the evolution of the conductivity along the flow direction. Consistently with the scales of the sedimentary transport energy, the results show that conductivity variances of fine sand, medium-coarse sand, and gravel decrease from the upper (zone 1) to the lower (zone 3) portion along the flow direction. In zone 1, sediments were moved by higher-energy flooding, which induces poor sorting and larger conductivity variances. The composite variance confirms this feature with statistically different facies from zone 1 to zone 3. Lastly, the results of this study provide insights to improve our understanding of conductivity heterogeneity and a method for characterizing the spatial distribution of K in alluvial fans.
Constraining the local variance of H₀ from directional analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bengaly, C.A.P. Jr., E-mail: carlosap@on.br
We evaluate the local variance of the Hubble constant H₀ with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of H₀ from standard candles (H₀ = 73.8 ± 2.4 km s⁻¹ Mpc⁻¹) with that of the Planck Cosmic Microwave Background data (H₀ = 67.8 ± 0.9 km s⁻¹ Mpc⁻¹). We obtain that H₀ ranges from 68.9 ± 0.5 km s⁻¹ Mpc⁻¹ to 71.2 ± 0.7 km s⁻¹ Mpc⁻¹ across the celestial sphere (1σ uncertainty), implying a maximal Hubble constant variance of δH₀ = (2.30 ± 0.86) km s⁻¹ Mpc⁻¹ towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H₀ variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H₀ value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H₀ determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Ishii, Hiroaki
2009-01-01
This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals, and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.
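For reference, the crisp mean-variance model that these formulations extend can be written as follows; this is the textbook form, with the robust versions letting the mean vector μ and covariance matrix Σ range over uncertainty sets and replacing the return goal r_G with a fuzzy goal:

\begin{align*}
\min_{x}\quad & x^{\top}\Sigma x && \text{(portfolio variance)}\\
\text{s.t.}\quad & \mu^{\top}x \ge r_{G} && \text{(target mean return)}\\
& \mathbf{1}^{\top}x = 1,\quad x \ge 0 && \text{(budget, no short selling)}
\end{align*}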
Decentralized control of Markovian decision processes: Existence of sigma-admissible policies
NASA Technical Reports Server (NTRS)
Greenland, A.
1980-01-01
The problem of formulating and analyzing Markov decision models having decentralized information and decision patterns is examined. Included are basic examples as well as the mathematical preliminaries needed to understand Markov decision models and, further, to superimpose decentralized decision structures on them. The notion of a variance admissible policy for the model is introduced, and it is proved that there exist (possibly nondeterministic) optimal policies within the class of variance admissible policies. Directions for further research are explored.
Spatial correlation of probabilistic earthquake ground motion and loss
Wesson, R.L.; Perkins, D.M.
2001-01-01
Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for calculating the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components: intraevent variability reduces the spatial correlation of losses, while interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of the strong impact on estimates of earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
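The effect of the interevent/intraevent split on portfolio loss variance can be seen in a few lines. The sketch below uses illustrative values and neglects intraevent spatial correlation, so the shared interevent term is the only source of correlation between sites.

import numpy as np

# Variance of total loss over n sites for one event, with ground-motion
# variability split into an interevent (shared) and an intraevent
# (site-specific) component. Values are illustrative, not from the paper.
n = 50
sigma_inter, sigma_intra = 0.3, 0.5
w = np.full(n, 1.0 / n)                    # equal exposure weights

# Covariance: shared interevent term for all pairs, independent intraevent term
cov = sigma_inter**2 * np.ones((n, n)) + sigma_intra**2 * np.eye(n)
var_portfolio = w @ cov @ w
print(var_portfolio)  # approaches sigma_inter^2 + sigma_intra^2/n as n grows,
                      # so the interevent part does not average out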
Sleep and nutritional deprivation and performance of house officers.
Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R
1985-07-01
A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
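The empirical Bayes idea for the Poisson case can be illustrated with Robbins' classic nonparametric estimator (the paper derives a smoothed variant, which is not reproduced here). A Monte Carlo comparison against the minimum variance unbiased/maximum likelihood estimator, with an assumed gamma prior used only to simulate the data:

import numpy as np

rng = np.random.default_rng(42)
n = 5000
lam = rng.gamma(shape=2.0, scale=1.5, size=n)    # unknown intensity parameters
x = rng.poisson(lam)                              # one observation per unit

# Robbins' nonparametric EB estimate: E[lam | x] ~ (x+1) f(x+1) / f(x),
# where f is the empirical frequency of the observed counts.
counts = np.bincount(x, minlength=x.max() + 2)
f = counts / n
eb = (x + 1) * f[x + 1] / np.maximum(f[x], 1.0 / n)

mse_eb = np.mean((eb - lam) ** 2)
mse_mle = np.mean((x - lam) ** 2)   # MLE / minimum variance unbiased estimate is x
print(mse_eb, mse_mle)              # EB typically shows the lower mean-squared error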
A Multipath Mitigation Algorithm for Vehicles with a Smart Antenna
NASA Astrophysics Data System (ADS)
Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang
2018-01-01
In this paper, an adaptive antenna array method is used to eliminate multipath interference at the GPS L1 frequency. The power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm are combined to design and simulate the anti-jamming antenna array; the algorithms are implemented on an FPGA and tested on an actual CBD road. Theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms, together with the field tests, verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. The results offer guidance and reference for applying satellite navigation in vehicle engineering practice.
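The MVDR weight computation at the heart of the comparison above is compact enough to sketch. The array geometry, loading factor, and scenario below are illustrative assumptions, not the paper's setup; the constraint keeps unit gain toward the satellite while minimizing total output power, which suppresses multipath arriving from other directions.

import numpy as np

def mvdr_weights(R, steering, loading=1e-3):
    # MVDR (Capon) weights w = R^-1 a / (a^H R^-1 a): distortionless response
    # toward `steering`, minimum output power (interference and multipath)
    # otherwise. PI instead minimizes power under a fixed-weight constraint.
    n = R.shape[0]
    Rl = R + loading * np.trace(R).real / n * np.eye(n)
    Ri_a = np.linalg.solve(Rl, steering)
    return Ri_a / (steering.conj() @ Ri_a)

def steer(theta, n=4):
    # Uniform linear array at half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

rng = np.random.default_rng(0)
snap = 200
a_sat, a_mp = steer(0.0), steer(np.deg2rad(50))   # direct path and multipath
X = (0.1 * rng.standard_normal((4, snap)) +       # receiver noise
     a_mp[:, None] * rng.standard_normal(snap))   # strong multipath replica
R = X @ X.conj().T / snap
w = mvdr_weights(R, a_sat)
print(np.abs(w.conj() @ a_sat), np.abs(w.conj() @ a_mp))  # ~1 vs. suppressed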
12 CFR 567.4 - Capital directives.
Code of Federal Regulations, 2010 CFR
2010-01-01
... requirement, the leverage ratio requirement, the tangible capital requirement, or individual minimum capital... capital directive, it may become effective immediately. A capital directive shall remain in effect and... plan shall continue in full force and effect. (b) Relation to other administrative actions. The Office...
Covariance functions for body weight from birth to maturity in Nellore cows.
Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G
2010-03-01
The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best at describing the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
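The Legendre covariables used in such random regression models are straightforward to construct: age is standardized to [-1, 1] and evaluated in the Legendre basis. A minimal sketch follows (the mixed-model solving itself is not shown, and the ages are illustrative).

import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(age, age_min, age_max, order):
    # Legendre polynomial covariables for random regression on age,
    # with age standardized to [-1, 1] as is usual in animal models.
    t = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    return legendre.legvander(t, order)   # column j holds P_j(t)

ages = np.array([1, 240, 365, 730, 2920], dtype=float)  # days, birth to 8 yr
Phi = legendre_covariables(ages, 1.0, 2920.0, order=4)   # quartic regression
print(Phi.shape)  # (5, 5): intercept plus 4 polynomial terms

# The covariance function between two ages then follows as
# Cov(y(t1), y(t2)) = phi(t1)^T K phi(t2), with K the estimated
# (co)variance matrix of the random regression coefficients.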
Klein, Anke M; Kleinherenbrink, Annelies V; Simons, Carlijn; de Gier, Erwin; Klein, Steven; Allart, Esther; Bögels, Susan M; Becker, Eni S; Rinck, Mike
2012-09-01
Several information-processing models highlight the independent roles of controlled and automatic processes in explaining fearful behavior. Therefore, we investigated whether direct measures of controlled processes and indirect measures of automatic processes predict unique variance components of children's spider fear-related behavior. Seventy-seven children between 8 and 13 years of age performed an Affective Priming Task (APT) measuring associative bias, a pictorial version of the Emotional Stroop Task (EST) measuring attentional bias, filled out the Spider Anxiety and Disgust Screening for Children (SADS-C) in order to assess self-perceived fear, and took part in a Behavioral Assessment Test (BAT) to measure avoidance of spiders. The SADS-C, EST, and APT did not correlate with each other. Spider fear-related behavior was best explained by SADS-C, APT, and EST together; they explained 51% of the variance in BAT behavior. No children with clinical levels of spider phobia were tested. The direct and the different indirect measures did not correlate with each other. These results indicate that both direct and indirect measures are useful for predicting unique variance components of fear-related behavior in children. The lack of relations between direct and indirect measures may explain why some earlier studies did not find stronger color-naming interference or stronger fear associations in children with high levels of self-reported fear. It also suggests that children with high levels of spider-fearful behavior have different fear-related associations and display higher interference by spider stimuli than children with non-fearful behavior. Copyright © 2012 Elsevier Ltd. All rights reserved.
Generalizability of Scaling Gradients on Direct Behavior Ratings
ERIC Educational Resources Information Center
Chafouleas, Sandra M.; Christ, Theodore J.; Riley-Tillman, T. Chris
2009-01-01
Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons…
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Remedies. 3.14 Section 3.14 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Enforcement § 3.14 Remedies. A bank that does not have or maintain the minimum capital ratios applicable to it...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Remedies. 3.14 Section 3.14 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE OF DIRECTIVES Enforcement § 3.14 Remedies. A bank that does not have or maintain the minimum capital ratios applicable to it...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-25
... a minimum participation allocation of at least one (1) contract. Specifically, the proposal ensures that the DLMM will be allocated a minimum of one contract in situations where, due to the Exchange's... DLMM being allocated zero contracts. \\5\\ A ``Directed Order'' is an order entered into the System by an...
Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L
2017-10-01
Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.
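The scaling property described above can be made concrete. The sketch below builds a two-population genomic relationship matrix in which each within-population block uses its own current allele frequencies and the across-population block is scaled by the product of the square roots of the within-population scaling factors; the genotype matrices and frequencies are simulated placeholders.

import numpy as np

def multipop_grm(M1, M2, p1, p2):
    # Genomic relationship blocks for two populations. M1, M2 are 0/1/2
    # genotype matrices (animals x loci); p1, p2 are current allele
    # frequencies in each population.
    Z1 = M1 - 2 * p1                      # center with population-1 frequencies
    Z2 = M2 - 2 * p2                      # center with population-2 frequencies
    s1 = 2 * np.sum(p1 * (1 - p1))        # within-population scaling factors
    s2 = 2 * np.sum(p2 * (1 - p2))
    G11 = Z1 @ Z1.T / s1
    G22 = Z2 @ Z2.T / s2
    G12 = Z1 @ Z2.T / np.sqrt(s1 * s2)    # across-block scaling = sqrt(s1*s2)
    return np.block([[G11, G12], [G12.T, G22]])

rng = np.random.default_rng(3)
M1 = rng.integers(0, 3, (10, 500))
M2 = rng.integers(0, 3, (12, 500))
G = multipop_grm(M1, M2, M1.mean(0) / 2, M2.mean(0) / 2)
print(G.shape)  # (22, 22): both populations in one relationship matrix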
Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja
2013-01-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.
Reliability analysis of the objective structured clinical examination using generalizability theory.
Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián
2016-01-01
The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
Hossain, Md Golam; Saw, Aik; Alam, Rashidul; Ohtsuki, Fumio; Kamarul, Tunku
2013-09-01
Cephalic index (CI), the ratio of head breadth to head length, is widely used to categorise human populations. The aim of this study was to assess the impact of anthropometric measurements on the CI of male Japanese university students. This study included 1,215 male university students from Tokyo and Kyoto, selected using convenience sampling. Multiple regression analysis was used to determine the effect of anthropometric measurements on CI. The variance inflation factor (VIF) showed no evidence of a multicollinearity problem among independent variables. The coefficients of the regression line demonstrated a significant positive relationship between CI and minimum frontal breadth (p < 0.01), bizygomatic breadth (p < 0.01) and head height (p < 0.05), and a negative relationship between CI and morphological facial height (p < 0.01) and head circumference (p < 0.01). Moreover, the coefficient and odds ratio of logistic regression analysis showed a greater likelihood for minimum frontal breadth (p < 0.01) and bizygomatic breadth (p < 0.01) to predict round-headedness, and morphological facial height (p < 0.05) and head circumference (p < 0.01) to predict long-headedness. Stepwise regression analysis revealed bizygomatic breadth, head circumference, minimum frontal breadth, head height and morphological facial height to be the best craniofacial predictors of CI. The results suggest that most of the variables considered in this study appear to influence the CI of adult male Japanese students.
Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem
Schneider, David C.; Piatt, John F.
1986-01-01
The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
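The variance-to-mean ratio analysis of patchiness can be sketched by aggregating transect counts into successively larger blocks and recomputing the ratio; ratios that rise with block size indicate scale-dependent aggregation. The data below are synthetic stand-ins for the census counts.

import numpy as np

def variance_to_mean(counts, block):
    # Variance-to-mean ratio of transect counts aggregated into blocks of
    # `block` consecutive census bins.
    n = (len(counts) // block) * block
    agg = counts[:n].reshape(-1, block).sum(axis=1)
    return agg.var(ddof=1) / agg.mean()

rng = np.random.default_rng(7)
# Synthetic transect: Poisson background plus a few multi-bin "schools"
counts = rng.poisson(1.0, 400).astype(float)
for start in rng.integers(0, 380, 6):
    counts[start:start + 20] += rng.poisson(8.0, 20)

for block in (1, 4, 16, 64):    # 0.25, 1, 4, 16 km with 0.25-km bins
    print(block, variance_to_mean(counts, block))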
34 CFR 668.10 - Direct assessment programs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of direct measures include projects, papers, examinations, presentations, performances, and... academic year in a direct assessment program is a period of instructional time that consists of a minimum...), or (c), as applicable, using the academic year determined in accordance with paragraph (a)(3)(i) of...
Thrust Direction Optimization: Satisfying Dawn's Attitude Agility Constraints
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2013-01-01
The science objective of NASA's Dawn Discovery mission is to explore the two largest members of the main asteroid belt, the giant asteroid Vesta and the dwarf planet Ceres. Dawn successfully completed its orbital mission at Vesta. The Dawn spacecraft has complex, difficult to quantify, and in some cases severe limitations on its attitude agility. The low-thrust transfers between science orbits at Vesta required very complex time varying thrust directions due to the strong and complex gravity and various science objectives. Traditional thrust design objectives (like minimum (Delta)V or minimum transfer time) often result in thrust direction time evolutions that can not be accommodated with the attitude control system available on Dawn. This paper presents several new optimal control objectives, collectively called thrust direction optimization that were developed and necessary to successfully navigate Dawn through all orbital transfers at Vesta.
Solar Control of Earth's Ionosphere: Observations from Solar Cycle 23
NASA Astrophysics Data System (ADS)
Doe, R. A.; Thayer, J. P.; Solomon, S. C.
2005-05-01
A nine-year database of sunlit E-region electron density altitude profiles (Ne(z)) measured by the Sondrestrom ISR has been partitioned over a 30-bin parameter space of averaged 10.7 cm solar radio flux (F10.7) and solar zenith angle (χ) to investigate long-term solar and thermospheric variability, and to validate contemporary EUV photoionization models. A two-stage filter, based on rejection of Ne(z) profiles with a large Hall to Pedersen ratio, is used to minimize auroral contamination. The resultant filtered mean Ne compares favorably with subauroral Ne measured for the same F10.7 and χ conditions at the Millstone Hill ISR. Mean Ne, as expected, increases with solar activity and decreases with large χ, and the variance around mean Ne is shown to be greatest at low F10.7 (solar minimum). ISR-derived mean Ne is compared with two EUV models: (1) a simple model without photoelectrons, based on the 5-105 nm EUVAC model solar flux [Richards et al., 1994], and (2) the GLOW model [Solomon et al., 1988; Solomon and Abreu, 1989], suitably modified for inclusion of XUV spectral components and photoelectron flux. Across parameter space and for all altitudes, Model 2 provides a closer match to ISR mean Ne and suggests that the photoelectron and XUV enhancements are essential to replicate measured plasma densities below 150 km. Simulated Ne variance envelopes, given by perturbing the Model 2 neutral atmosphere input by the measured extrema in Ap, F10.7, and Te, are much narrower than the ISR-derived geophysical variance envelopes. We thus conclude that long-term variability of the EUV spectra dominates over thermospheric variability and that EUV spectral variability is greatest at solar minimum. ISR-model comparison also provides evidence for the emergence of an H (Lyman β) Ne feature at solar maximum. Richards, P. G., J. A. Fennelly, and D. G. Torr, EUVAC: A solar EUV flux model for aeronomic calculations, J. Geophys. Res., 99, 8981, 1994. Solomon, S. C., P. B. Hays, and V. J. Abreu, The auroral 6300 Å emission: Observations and Modeling, J. Geophys. Res., 93, 9867, 1988. Solomon, S. C. and V. J. Abreu, The 630 nm dayglow, J. Geophys. Res., 94, 6817, 1989.
Everything that you have ever been told about assessment center ratings is confounded.
Jackson, Duncan J R; Michaelides, George; Dewberry, Chris; Kim, Young-Jae
2016-07-01
Despite a substantial research literature on the influence of dimensions and exercises in assessment centers (ACs), the relative impact of these 2 sources of variance continues to raise uncertainties because of confounding. With confounded effects, it is not possible to establish the degree to which any 1 effect, including those related to exercises and dimensions, influences AC ratings. In the current study (N = 698) we used Bayesian generalizability theory to unconfound all of the possible effects contributing to variance in AC ratings. Our results show that ≤1.11% of the variance in AC ratings was directly attributable to behavioral dimensions, suggesting that dimension-related effects have no practical impact on the reliability of ACs. Even when taking aggregation level into consideration, effects related to general performance and exercises accounted for almost all of the reliable variance in AC ratings. The implications of these findings for recent dimension- and exercise-based perspectives on ACs are discussed.
Mandible shape in hybrid mice.
Renaud, Sabrina; Alibert, Paul; Auffray, Jean-Christophe
2009-09-01
Hybridisation between closely related species is frequently seen as retarding evolutionary divergence, but it can also promote divergence by creating novel phenotypes through new genetic combinations and developmental interactions. We therefore investigated how hybridisation affects the shape of the mouse mandible, a well-known feature in evo-devo studies. Parental groups corresponded to two strains of the European mouse sub-species Mus musculus domesticus and Mus musculus musculus. Parents and hybrids were bred in controlled conditions. The mandibles of F(1) hybrids are mostly intermediate between parental phenotypes, as expected for a complex multigenic character. Nevertheless, a transgressive effect as well as an increased phenotypic variance characterise the hybrids. This suggests that hybridisation between the two subspecies could lead to a higher phenotypic variance due to complex interactions among the parental genomes, including non-additive genetic effects. The major direction of variance is, however, conserved among hybrids and parent groups. Hybridisation may thus play a role in the production of original transgressive phenotypes that arise along pre-existing patterns of variance.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
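As a sketch of the model structure (the standard GLS regional regression in the Tasker-Stedinger tradition; the Bayesian treatment described above places a prior on the model error variance rather than fixing it):

$$ \hat{y} = X\beta + \delta + \eta, \qquad \Lambda(\sigma_{\delta}^{2}) = \sigma_{\delta}^{2} I + \Sigma, \qquad \hat{\beta}(\sigma_{\delta}^{2}) = \left(X^{\top}\Lambda^{-1}X\right)^{-1} X^{\top}\Lambda^{-1}\hat{y}, $$

where δ is the model error with variance σ²_δ, η is the sampling error of the at-site estimators with (approximately) known covariance Σ, and the quasi-analytic posterior of σ²_δ is propagated to the posterior mean and variance of β.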
NASA Astrophysics Data System (ADS)
Jerousek, Richard Gregory; Colwell, Josh; Hedman, Matthew M.; French, Richard G.; Marouf, Essam A.; Esposito, Larry; Nicholson, Philip D.
2017-10-01
The Cassini Ultraviolet Imaging Spectrograph (UVIS) and Visual and Infrared Mapping Spectrometer (VIMS) have measured ring optical depths over a wide range of viewing geometries at effective wavelengths of 0.15 μm and 2.9 μm respectively. Using Voyager S and X band radio occultations and the direct inversion of the forward scattered S band signal, Marouf et al. (1982), (1983), and Zebker et al. (1985) determined the power-law size distribution parameters assuming a minimum particle radius of 1 mm. Many further studies have also constrained aspects of the particle size distribution throughout the main rings. Marouf et al. (2008a) determined the smallest ring particles to have radii of 4-5 mm using Cassini RSS data. Harbison et al. (2013) used VIMS solar occultations and also found minimum particle sizes of 4-5 mm in the C ring with q ~ 3.1, where n(a)da = Ca^(-q)da is the assumed differential power-law size distribution for particles of radius a. Recent studies of excess variance in the stellar signal by Colwell et al. (2017, submitted) constrain the cross-section-weighted effective particle radius to between 1 m and several meters. Using the wide range of viewing geometries available to VIMS and UVIS stellar occultations, we find that normal optical depth does not strongly depend on viewing geometry at 10 km resolution (which would be the case if self-gravity wakes were present). Throughout the C ring, we fit power-law derived optical depths to those measured by UVIS, VIMS, and by the Cassini Radio Science Subsystem (RSS) at 0.94 and 3.6 cm wavelengths to constrain the four parameters of the size distribution at 10 km radial resolution. We find significant amounts of particle size sorting throughout the region, with a positive correlation between maximum particle size (amax) and normal optical depth and a mean value of amax ~ 3 m in the background C ring. This correlation is negative in the C ring plateaus. We find an inverse correlation of minimum particle radius with normal optical depth and a mean value of amin ~ 4 mm in the background C ring, with slightly larger minimum particle sizes in the C ring plateaus.
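For intuition on how the quoted amin and amax combine, the sketch below evaluates the cross-section-weighted effective radius of a truncated power-law size distribution; the definition used here (⟨a³⟩/⟨a²⟩) is one common convention, and the parameter values are only the illustrative ones quoted above.

```python
import numpy as np

def moment(q, a_min, a_max, k):
    """Integral of a**k * a**(-q) da over [a_min, a_max] for n(a) = C a**(-q)."""
    p = k - q + 1.0
    if abs(p) < 1e-12:
        return np.log(a_max / a_min)
    return (a_max**p - a_min**p) / p

def effective_radius(q, a_min, a_max):
    # Cross-section-weighted effective radius: <a^3>/<a^2> under n(a) = C a^-q
    return moment(q, a_min, a_max, 3) / moment(q, a_min, a_max, 2)

# Illustrative values quoted in the abstract: q ~ 3.1, amin ~ 4 mm, amax ~ 3 m
print(effective_radius(3.1, 0.004, 3.0))  # ~ 0.35 m for these parameters
```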
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to cases in which the transformation parameters are large, and it requires no approximate values of the parameters. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
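For reference, the 7-parameter similarity (Helmert) case handled by this framework has the functional model

$$ \mathbf{x}_t = \mathbf{t} + \mu\, R(\alpha, \beta, \gamma)\, \mathbf{x}_s, $$

with translation t, scale μ, and rotation matrix R, and with random errors attached to both the start coordinates x_s and the target coordinates x_t; it is this errors-in-both-systems structure that motivates the weighted total least-squares treatment rather than ordinary least squares.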
Precision of Four Acoustic Bone Measurement Devices
NASA Technical Reports Server (NTRS)
Miller, Christopher; Rianon, Nahid; Feiveson, Alan; Shackelford, Linda; LeBlanc, Adrian
2000-01-01
Though many studies have quantified the precision of various acoustic bone measurement devices, it is difficult to directly compare the results among the studies, because they used disparate subject pools, did not specify the estimation methodology, or did not use consistent definitions for various precision characteristics. In this study, we used a repeated measures design protocol to directly determine the precision characteristics of four acoustic bone measurement devices: the Mechanical Response Tissue Analyzer (MRTA), the UBA-575+, the SoundScan 2000 (S2000), and the Sahara Ultrasound Bone Analyzer. Ten men and ten women were scanned on all four devices by two different operators at five discrete time points: Week 1, Week 2, Week 3, Month 3 and Month 6. The percent coefficient of variation (%CV) and standardized coefficient of variation were computed for the following precision characteristics: interoperator effect, operator-subject interaction, short-term error variance, and long-term drift. The MRTA had high interoperator errors for its ulnar and tibial stiffness measures and a large long-term drift in its tibial stiffness measurement. The UBA-575+ exhibited large short-term error variances and long-term drift for all three of its measurements. The S2000's tibial speed of sound measurement showed a high short-term error variance and a significant operator-subject interaction but very good values (less than 1%) for the other precision characteristics. The Sahara seemed to have the best overall performance, but was hampered by a large %CV for short-term error variance in its broadband ultrasound attenuation measure.
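As an illustration of the short-term precision metric, the sketch below computes a %CV by pooling within-subject variance over repeat scans; this is one common definition (RMS-SD over the grand mean), the data are synthetic, and the study's mixed-model estimation of the other precision components is not reproduced here.

```python
import numpy as np

def percent_cv(measurements):
    """Short-term precision as percent coefficient of variation.

    measurements: 2-D array, shape (n_subjects, n_repeats), one device/measure.
    Pools the within-subject variance across subjects (RMS-SD) and
    normalizes by the grand mean.
    """
    x = np.asarray(measurements, dtype=float)
    within_var = x.var(axis=1, ddof=1)     # per-subject variance over repeats
    rms_sd = np.sqrt(within_var.mean())    # root-mean-square SD across subjects
    return 100.0 * rms_sd / x.mean()

# Hypothetical example: 20 subjects, 3 short-term repeat scans each
rng = np.random.default_rng(0)
data = 100 + 5 * rng.standard_normal((20, 1)) + 1.0 * rng.standard_normal((20, 3))
print(percent_cv(data))  # ~1% for a 1-unit repeat SD on a mean of ~100
```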
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-07-14
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.
Variance adaptation in navigational decision making
NASA Astrophysics Data System (ADS)
Gershow, Marc; Gepner, Ruben; Wolk, Jason; Wadekar, Digvijay
Drosophila larvae navigate their environments using a biased random walk strategy. A key component of this strategy is the decision to initiate a turn (change direction) in response to declining conditions. We modeled this decision as the output of a Linear-Nonlinear-Poisson cascade and used reverse correlation with visual and fictive olfactory stimuli to find the parameters of this model. Because the larva responds to changes in stimulus intensity, we used stimuli with uncorrelated normally distributed intensity derivatives, i.e. Brownian processes, and took the stimulus derivative as the input to our LNP cascade. In this way, we were able to present stimuli with 0 mean and controlled variance. We found that the nonlinear rate function depended on the variance in the stimulus input, allowing larvae to respond more strongly to small changes in low-noise compared to high-noise environments. We measured the rate at which the larva adapted its behavior following changes in stimulus variance, and found that larvae adapted more quickly to increases in variance than to decreases, consistent with the behavior of an optimal Bayes estimator. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
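A minimal simulation of the modeling pipeline described above, with a hypothetical filter and an exponential nonlinearity (all parameters invented for illustration): a Brownian stimulus drives a Linear-Nonlinear-Poisson cascade, and reverse correlation recovers a filter estimate that, for Gaussian inputs, is proportional to the true one.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.05, 200_000
dsdt = rng.standard_normal(n) / np.sqrt(dt)    # white stimulus derivative, zero mean

t = np.arange(0, 2.0, dt)
k_true = -np.exp(-t / 0.5)                     # hypothetical filter: turns follow declines
drive = np.convolve(dsdt, k_true, mode="full")[:n] * dt
rate = 0.5 * np.exp(drive)                     # static nonlinearity -> Poisson rate (Hz)
turns = rng.random(n) < rate * dt              # Bernoulli approximation of Poisson events

lags = len(k_true)
idx = np.flatnonzero(turns)
idx = idx[idx >= lags]
sta = np.zeros(lags)
for i in idx:
    sta += dsdt[i - lags + 1 : i + 1][::-1]    # stimulus history preceding each turn
sta /= len(idx)
# For Gaussian inputs and this cascade, sta is proportional to k_true.
```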
Xu, Nan; Veesler, David; Doerschuk, Peter C; Johnson, John E
2018-05-01
The information content of cryo EM data sets exceeds that of the electron scattering potential (cryo EM density) initially derived for structure determination. Previously we demonstrated the power of data variance analysis for characterizing regions of cryo EM density that displayed functionally important variance anomalies associated with maturation cleavage events in Nudaurelia Omega Capensis Virus and the presence or absence of a maturation protease in bacteriophage HK97 procapsids. Here we extend the analysis in two ways. First, instead of imposing icosahedral symmetry on every particle in the data set during the variance analysis, we only assume that the data set as a whole has icosahedral symmetry. This change removes artifacts of high variance along icosahedral symmetry axes, but retains all of the features previously reported in the HK97 data set. Second, we present a covariance analysis that reveals correlations in structural dynamics (variance) between the interior of the HK97 procapsid with the protease and regions of the exterior (not seen in the absence of the protease). The latter analysis corresponds well with previously published hydrogen deuterium exchange studies that reveal the same correlation.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Salvucci, Guido D.; Rigden, Angela J.; Jung, Martin; Collatz, G. James; Schubert, Siegfried D.
2015-01-01
The spatial pattern across the continental United States of the interannual variance of warm season water-dependent evapotranspiration, a pattern of relevance to land-atmosphere feedback, cannot be measured directly. Alternative and indirect approaches to estimating the pattern, however, do exist, and given the uncertainty of each, we use several such approaches here. We first quantify the water-dependent evapotranspiration variance pattern inherent in two derived evapotranspiration datasets available from the literature. We then search for the pattern in proxy geophysical variables (air temperature, stream flow, and NDVI) known to have strong ties to evapotranspiration. The variances inherent in all of the different (and mostly independent) data sources show some differences but are generally strongly consistent: they all show a large variance signal down the center of the U.S., with lower variances toward the east and (for the most part) toward the west. The robustness of the pattern across the datasets suggests that it indeed represents the pattern operating in nature. Using Budyko's hydroclimatic framework, we show that the pattern can largely be explained by the relative strength of water and energy controls on evapotranspiration across the continent.
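For context, one common form of the Budyko curve invoked in such hydroclimatic reasoning (a standard relation, not a result of this study) expresses long-term mean evapotranspiration through the aridity ratio:

$$ \frac{\overline{ET}}{\overline{P}} \approx \left[ \frac{\overline{E_p}}{\overline{P}} \tanh\!\left(\frac{\overline{P}}{\overline{E_p}}\right) \left(1 - e^{-\overline{E_p}/\overline{P}}\right) \right]^{1/2}, $$

where P is precipitation and E_p potential evapotranspiration; regions near the transition between water and energy control (down the center of the U.S.) sit on the steep part of this curve, where interannual forcing variability maps into large evapotranspiration variance.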
Reyes, Mayra I; Pérez, Cynthia M; Negrón, Edna L
2008-03-01
Consumers increasingly use bottled water and home water treatment systems to avoid drinking tap water directly. According to the International Bottled Water Association (IBWA), an industry trade group, 5 billion gallons of bottled water were consumed by North Americans in 2001. The principal aim of this study was to assess the microbial quality of in-house and imported bottled water for human consumption, by measurement and comparison of the concentration of bacterial endotoxin and standard cultivable methods of indicator microorganisms, specifically, heterotrophic and fecal coliform plate counts. A total of 21 brands of commercial bottled water, consisting of 10 imported and 11 in-house brands, selected at random from 96 brands that are consumed in Puerto Rico, were tested at three different time intervals. The standard Limulus Amebocyte Lysate test, gel clot method, was used to measure the endotoxin concentrations. The minimum endotoxin concentration in 63 water samples was less than 0.0625 EU/mL, while the maximum was 32 EU/mL. The minimum bacterial count showed no growth, while the maximum was 7,500 CFU/mL. Bacterial isolates such as P. fluorescens, Corynebacterium sp. J-K, S. paucimobilis, P. versicularis, A. baumannii, P. chlororaphis, F. indologenes, A. faecalis and P. cepacia were identified. Repeated measures analysis of variance demonstrated that endotoxin concentration did not change over time, while there was a statistically significant (p < 0.05) decrease in bacterial count over time. In addition, multiple linear regression analysis demonstrated that a unit change in the concentration of endotoxin across time was associated with a significant (p < 0.05) reduction in the bacteriological cell count. This analysis revealed a significant time effect in the average log bacteriological cell count. Although bacterial growth was not detected in some water samples, endotoxin was present. Measurement of Gram-negative bacterial endotoxins is one of the methods that have been suggested as a rapid way of determining bacteriological water quality.
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
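The sketch below shows the role of a TRA recursion parameter in a single STFT frame of a simplified noise reducer; a Wiener-type gain stands in for the full MMSE gain rule, the speech-presence guard is a crude placeholder, and the parameter value is illustrative rather than one of the paper's optimized values.

```python
import numpy as np

def denoise_frame(Y, noise_psd, alpha=0.98):
    """One STFT frame of a simplified noise reducer with time-recursive
    averaging (TRA) of the noise estimate.

    Y: complex spectrum of the current frame.
    noise_psd: running per-bin noise power estimate (returned updated).
    alpha: TRA recursion parameter, of the kind the paper tunes by
    simulated annealing; 0.98 here is only illustrative.
    """
    power = np.abs(Y) ** 2
    # crude speech-presence guard: cap how fast the noise track can grow
    noise_psd = alpha * noise_psd + (1 - alpha) * np.minimum(power, 4 * noise_psd)
    snr = np.maximum(power / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    gain = snr / (1.0 + snr)        # Wiener-type magnitude gain
    return gain * Y, noise_psd
```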
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-01-01
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064
Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.
Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya
2013-08-20
It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted for the normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprising the number of events that would occur in a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-y period in the Japanese area. Among the minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima, namely precursory and nonprecursory, to large earthquakes.
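Concretely, in natural time analysis the k-th of N events is assigned χ_k = k/N and weight p_k = E_k / Σ_j E_j (its normalized energy), and the order parameter whose fluctuations are tracked here is

$$ \kappa_1 = \sum_{k=1}^{N} p_k \chi_k^{2} - \left( \sum_{k=1}^{N} p_k \chi_k \right)^{2}, $$

the variance of χ under the energy weights, computed within each sliding window of fixed event count.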
NASA Astrophysics Data System (ADS)
Anick, David J.
2010-04-01
For (H2O)20X water clusters consisting of X enclosed by the 5¹² dodecahedral cage, X=empty, H2O, NH3, and H3O+, databases are made consisting of 55-82 isomers optimized via B3LYP/6-311++G∗∗. Correlations are explored between ground state electronic energy (Ee) or electronic energy plus zero point energy (Ee+ZPE) and the clusters' topology, defined as the set of directed H-bonds. Linear regression is done to identify topological features that correlate with cluster energy. For each X, variables are found that account for 99% of the variance in Ee and predict it with a rms error under 0.2 kcal/mol. The method of analysis emphasizes the importance of an intermediate level of structure, the "O-topology," consisting of O-types and a list of O pairs that are bonded but omitting H-bond directions, as a device to organize the databases and reduce the number of structures one needs to consider. Relevant variables include three parameters, which count the number of H-bonds having particular donor and acceptor types; |M|2, where M is the cluster's vector dipole moment; and the projection of M onto the symmetry axis of X. Scatter diagrams for Ee or Ee+ZPE versus |M| show that clusters fall naturally into "families" defined by the values of certain discrete parameters, the "major parameters," for each X. Combining "family" analysis and O-topologies, a small group of clusters is identified for each X that are candidates to be the global minimum, and the minimum is determined. For X=H3O+, one cluster with central hydronium lies just 2.08 kcal/mol above the lowest isomer with surface hydronium. Implications of the methodology for dodecahedral (H2O)20(NH4+) and (H2O)20(NH4+)(OH-) are discussed, and new lower energy isomers are found. For MP2/TZVP, the lowest-energy (H2O)20(NH4+) isomer features a trifurcated H-bond. The results suggest a much more efficient and comprehensive way of seeking low-energy water cluster geometries that may have wide applicability.
Versatile Gaussian probes for squeezing estimation
NASA Astrophysics Data System (ADS)
Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo
2017-05-01
We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.
Applications of GARCH models to energy commodities
NASA Astrophysics Data System (ADS)
Humphreys, H. Brett
This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
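A baseline sketch of the univariate building block used throughout: a plain Gaussian GARCH(1,1) fitted by maximum likelihood. The thesis's seasonal variants would let the constant and persistence vary with an annual component; the returns here are synthetic stand-ins for natural gas or heating oil data.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Gaussian GARCH(1,1) negative log-likelihood for a return series r:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    """
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # enforce positivity and stationarity
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                    # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(2000)       # hypothetical daily returns
fit = minimize(garch11_nll, x0=[1e-5, 0.05, 0.90], args=(r,), method="Nelder-Mead")
omega, alpha, beta = fit.x
```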
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.
2013-12-01
A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.
Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Richardson, I. G.; Ling, A. G.
2011-01-01
In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times. In 2009, the approx.2 GV GCR intensity measured by the Newark neutron monitor increased by approx.5% relative to its maximum value two cycles earlier even though the average tilt angle in 2009 was slightly larger than that in 1986 (approx.20deg vs. approx.14deg), while solar wind B was significantly lower (approx.3.9 nT vs. approx.5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the approx. 1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past approx. 10(exp 4) years shows nine abrupt and relatively short-lived drops of B to < or approx.= 0 nT, with the first of these corresponding to the Sporer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of approx.2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.
Code of Federal Regulations, 2012 CFR
2012-10-01
Appendix F to Part 236: Minimum Requirements for Mandatory Independent Third-Party Assessment of PTC System Safety Verification and Validation. (a) This appendix provides minimum requirements for mandatory independent third-party assessment of PTC system safety verification and validation pursuant to subpart H or I...
Recent Trends in Advance Directives at Nursing Home Admission and One Year after Admission
ERIC Educational Resources Information Center
McAuley, William J.; Buchanan, Robert J.; Travis, Shirley S.; Wang, Suojin; Kim, MyungSuk
2006-01-01
Purpose: Advance directives are important planning and decision-making tools for individuals in nursing homes. Design and Methods: By using the nursing facility Minimum Data Set, we examined the prevalence of advance directives at admission and 12 months post-admission. Results: The prevalence of having any advance directive at admission declined…
Code of Federal Regulations, 2010 CFR
2010-01-01
Bureau of Economic Analysis, Department of Commerce, Direct Investment Surveys, § 806.1 Purpose. The purpose of this part is to ... concerning direct investment as required by, or provided for in, the International Investment Survey Act of ... investment, including direct investment, and to do so with a minimum of burden on respondents and with no...
1951-05-01
procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting... experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes... on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the /10's errors by dial types may
Non-additive genetic variation in growth, carcass and fertility traits of beef cattle.
Bolormaa, Sunduimijid; Pryce, Jennie E; Zhang, Yuandan; Reverter, Antonio; Barendse, William; Hayes, Ben J; Goddard, Michael E
2015-04-02
A better understanding of non-additive variance could lead to increased knowledge on the genetic control and physiology of quantitative traits, and to improved prediction of the genetic value and phenotype of individuals. Genome-wide panels of single nucleotide polymorphisms (SNPs) have been mainly used to map additive effects for quantitative traits, but they can also be used to investigate non-additive effects. We estimated dominance and epistatic effects of SNPs on various traits in beef cattle and the variance explained by dominance, and quantified the increase in accuracy of phenotype prediction by including dominance deviations in its estimation. Genotype data (729 068 real or imputed SNPs) and phenotypes on up to 16 traits of 10 191 individuals from Bos taurus, Bos indicus and composite breeds were used. A genome-wide association study was performed by fitting the additive and dominance effects of single SNPs. The dominance variance was estimated by fitting a dominance relationship matrix constructed from the 729 068 SNPs. The accuracy of predicted phenotypic values was evaluated by best linear unbiased prediction using the additive and dominance relationship matrices. Epistatic interactions (additive × additive) were tested between each of the 28 SNPs that are known to have additive effects on multiple traits, and each of the other remaining 729 067 SNPs. The number of significant dominance effects was greater than expected by chance and most of them were in the direction that is presumed to increase fitness and in the opposite direction to inbreeding depression. Estimates of dominance variance explained by SNPs varied widely between traits, but had large standard errors. The median dominance variance across the 16 traits was equal to 5% of the phenotypic variance. Including a dominance deviation in the prediction did not significantly increase its accuracy for any of the phenotypes. The number of additive × additive epistatic effects that were statistically significant was greater than expected by chance. Significant dominance and epistatic effects occur for growth, carcass and fertility traits in beef cattle but they are difficult to estimate precisely and including them in phenotype prediction does not increase its accuracy.
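To make the "dominance relationship matrix constructed from the SNPs" concrete, the sketch below builds one from 0/1/2 genotype codes using the orthogonal coding of Vitezica et al. (2013); this is a plausible standard construction, not necessarily the exact one used in the paper.

```python
import numpy as np

def dominance_grm(X):
    """Dominance genomic relationship matrix from an (n x m) genotype matrix
    coded 0/1/2 (counts of one allele per SNP)."""
    X = np.asarray(X, float)
    p = X.mean(axis=0) / 2.0       # frequency of the counted allele, per SNP
    q = 1.0 - p
    # Zero-mean dominance covariates under Hardy-Weinberg equilibrium:
    # genotype 0 -> -2p^2, genotype 1 -> 2pq, genotype 2 -> -2q^2
    W = np.where(X == 0, -2 * p**2, np.where(X == 1, 2 * p * q, -2 * q**2))
    return (W @ W.T) / np.sum((2 * p * q) ** 2)

# The dominance variance is then estimated by fitting y = mu + u + d + e,
# with cov(d) = D * sigma2_d (e.g., via REML), alongside the additive GRM.
```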
Genetic and environmental factors affecting perinatal and preweaning survival of D'man lambs.
Boujenane, Ismaïl; Chikhi, Abdelkader; Lakcher, Oumaïma; Ibnelbachyr, Mustapha
2013-08-01
This study examined the viability of 4,554 D'man lambs born alive at Errachidia research station in south-eastern Morocco between 1988 and 2009. Lamb survival to 1, 10, 30 and 90 days old was 0.95, 0.93, 0.93 and 0.92, respectively. The majority of deaths (85.7%) occurred before 10 days of age. Type and period of birth both had a significant effect on lamb survival traits, whereas age of dam and sex of lamb did not. The study revealed a curvilinear relationship between lamb's birth weight and survival traits from birth to 90 days, with optimal birth weights for maximal perinatal and preweaning survival varying according to type of birth from 2.6 to 3.5 kg. Estimation of variance components, using an animal model including direct and maternal genetic effects, the permanent maternal environment as well as fixed effects, showed that direct and maternal heritability estimates for survival traits between birth and 90 days were mostly low and varied from 0.01 to 0.10; however, direct heritability for survival at 1 day from birth was estimated at 0.63. Genetic correlations between survival traits and birth weight were positive and low to moderate. It was concluded that survival traits of D'man lambs between birth and 90 days could be improved through selection, but genetic progress would be low. However, the high proportion of the residual variance to total variance reinforces the need to improve management and lambing conditions.
van Leeuwen, Christel M; Post, Marcel W; Westers, Paul; van der Woude, Lucas H; de Groot, Sonja; Sluis, Tebbe; Slootman, Hans; Lindeman, Eline
2012-01-01
To clarify relationships between activities, participation, mental health, and life satisfaction in persons with spinal cord injury (SCI) and specify how personal factors (self-efficacy, neuroticism, appraisals) interact with these components. We hypothesized that (1) activities are related directly to participation, participation is related directly to mental health and life satisfaction, and mental health and life satisfaction are 2 interrelated outcome variables; and (2) appraisals are mediators between participation and mental health and life satisfaction, and self-efficacy and neuroticism are related directly to mental health and life satisfaction and indirectly through appraisals. Follow-up measurement of a multicenter prospective cohort study 5 years after discharge from inpatient rehabilitation. Eight Dutch rehabilitation centers with specialized SCI units. Persons (N=143) aged 18 to 65 years at the onset of SCI. Not applicable. Mental health was measured by using the Mental Health subscale of the 36-Item Short Form Health Survey and life satisfaction with the sum score of "current life satisfaction" and "current life satisfaction compared with life satisfaction before SCI." Structural equation modeling showed that activities and neuroticism were related to participation and explained 49% of the variance in participation. Self-efficacy, neuroticism, and 2 appraisals were related to mental health and explained 35% of the variance in mental health. Participation, 3 appraisals, and mental health were related to life satisfaction and together explained 50% of the total variance in life satisfaction. Mental health and life satisfaction can be seen as 2 separate but interrelated outcome variables. Self-efficacy and neuroticism are related directly to mental health and indirectly to life satisfaction through the mediating role of appraisals. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
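As an illustration of the random regression machinery, the sketch below builds normalized Legendre polynomial covariates over days in milk (DIM), the type of basis used here for the additive genetic curves; the DIM range and normalization are typical choices and not necessarily the paper's exact ones.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, k, dim_min=5, dim_max=305):
    """Covariates of a k-th order normalized Legendre polynomial evaluated at
    days in milk, as used in random regression test-day models.
    Returns an array of shape (len(dim), k+1)."""
    t = np.asarray(dim, float)
    x = 2.0 * (t - dim_min) / (dim_max - dim_min) - 1.0   # map DIM to [-1, 1]
    cols = []
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        # normalized polynomial: phi_j(x) = sqrt((2j+1)/2) * P_j(x)
        cols.append(np.sqrt((2 * j + 1) / 2.0) * legendre.legval(x, c))
    return np.column_stack(cols)

Z = legendre_covariates(np.arange(5, 306, 7), k=6)  # 6th order, weekly test days
```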
Excoffier, L; Smouse, P E; Quattro, J M
1992-06-01
We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as phi-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivision. The method is flexible enough to accommodate several alternative input matrices, corresponding to different types of molecular data, as well as different types of evolutionary assumptions, without modifying the basic structure of the analysis. The significance of the variance components and phi-statistics is tested using a permutational approach, eliminating the normality assumption that is conventional for analysis of variance but inappropriate for molecular data. Application of AMOVA to human mitochondrial DNA haplotype data shows that population subdivisions are better resolved when some measure of molecular differences among haplotypes is introduced into the analysis. At the intraspecific level, however, the additional information provided by knowing the exact phylogenetic relations among haplotypes or by a nonlinear translation of restriction-site change into nucleotide diversity does not significantly modify the inferred population genetic structure. Monte Carlo studies show that site sampling does not fundamentally affect the significance of the molecular variance components. The AMOVA treatment is easily extended in several different directions and it constitutes a coherent and flexible framework for the statistical analysis of molecular data.
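A compact sketch of the single-subdivision case: variance components and a Φ_ST analog computed from squared inter-haplotype distances, with significance assessed by permuting haplotypes among populations. The published framework generalizes this to more hierarchical levels and other input matrices; this sketch follows the standard one-way sums-of-squares decomposition.

```python
import numpy as np

def phi_st(d2, groups):
    """Phi_ST analog from an (N x N) matrix of squared distances d2 and
    population labels, for one level of subdivision."""
    groups = np.asarray(groups)
    N = len(groups)
    ss_total = d2.sum() / (2.0 * N)
    ss_within, sizes = 0.0, []
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        ss_within += d2[np.ix_(idx, idx)].sum() / (2.0 * len(idx))
        sizes.append(len(idx))
    k = len(sizes)
    n0 = (N - sum(s**2 for s in sizes) / N) / (k - 1)   # effective sample size
    sigma_within = ss_within / (N - k)                  # MS within groups
    sigma_among = ((ss_total - ss_within) / (k - 1) - sigma_within) / n0
    return sigma_among / (sigma_among + sigma_within)

def permutation_p(d2, groups, n_perm=999, seed=0):
    """Permutation test: shuffle haplotypes among populations."""
    rng = np.random.default_rng(seed)
    obs = phi_st(d2, groups)
    perms = [phi_st(d2, rng.permutation(groups)) for _ in range(n_perm)]
    return obs, (1 + sum(p >= obs for p in perms)) / (n_perm + 1)
```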
2-D or not 2-D, that is the question: A Northern California test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayeda, K; Malagnini, L; Phillips, W S
2005-06-06
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal ''apples-to-apples'' test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than {approx}0.7-Hz, however for the high frequencies (0.7{le} f {le} 8.0-Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only 1-D coda correction is available it is still preferable over 2-D direct wave-based measures.
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2003-01-01
In this paper we present a comparison of trajectory optimization approaches for the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, and Nelder-Mead Simplex. Several cost function parameterizations are considered for the direct approach. We choose one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases which are chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line of apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods considered in this paper.
Cryogenic sapphire oscillator using a low-vibration design pulse-tube cryocooler: first results.
Hartnett, John; Nand, Nitin; Wang, Chao; Floch, Jean-Michel
2010-05-01
A cryogenic sapphire oscillator (CSO) has been implemented at 11.2 GHz using a low-vibration design pulse-tube cryocooler. Compared with a state-of-the-art liquid helium cooled CSO in the same laboratory, the square root Allan variance of their combined fractional frequency instability is σ_y(τ) = 1.4 × 10⁻¹⁵ τ^(-1/2) for integration times 1 < τ < 10 s, dominated by white frequency noise. The minimum σ_y = 5.3 × 10⁻¹⁶ for the two oscillators was reached at τ = 20 s. Assuming equal contributions from both CSOs, the single-oscillator phase noise is S_φ ≈ -96 dB rad²/Hz at 1 Hz offset from the carrier.
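For reference, the quoted stability can be estimated from fractional-frequency data with the textbook overlapping Allan deviation, as in the sketch below; details of the actual measurement pipeline are not reproduced here.

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y for
    averaging factor m (tau = m * tau0, with tau0 the sample spacing)."""
    y = np.asarray(y, float)
    c = np.concatenate(([0.0], np.cumsum(y)))
    ybar = (c[m:] - c[:-m]) / m        # overlapping m-sample means
    d = ybar[m:] - ybar[:-m]           # differences one tau apart
    return np.sqrt(0.5 * np.mean(d**2))

# For white frequency noise, adev(tau) falls as tau**-0.5, matching the
# sigma_y(tau) = 1.4e-15 * tau**-0.5 behavior quoted for 1 < tau < 10 s.
```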
Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45
NASA Astrophysics Data System (ADS)
Chin, Liem; Chendra, Erwinna; Sukmana, Agus
2018-01-01
To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, it has no constraint on the number of lots of stocks, and retail investors in Indonesia cannot engage in short selling. So, in this study we will develop an existing model by adding lot-of-stocks and short-selling constraints, to obtain the minimum-risk portfolio with and without a target return. We will analyse the stocks listed in the LQ45 index based on stock market capitalization. To perform this analysis, we will use the Solver add-in available in Microsoft Excel.
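A sketch of the resulting optimization model (the board lot of 100 shares per lot is an assumption here, as is the exact form of the constraints):

$$ \min_{\ell \in \mathbb{Z}_{\ge 0}^{n}} \; w^{\top} \Sigma\, w \quad \text{s.t.} \quad w_i = \frac{100\,\ell_i\, p_i}{\sum_j 100\,\ell_j\, p_j}, \qquad \mu^{\top} w \ge r^{*} \;\; \text{(optional target return)}, $$

where ℓ_i is the number of lots held in stock i, p_i its price, Σ the return covariance matrix, and ℓ_i ≥ 0 encodes the no-short-selling rule; the integer lot variable is what separates this model from the textbook Markowitz program and makes it suitable for a spreadsheet integer solver.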
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for an assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
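The fusion step rests on the classical minimum-variance combination of two unbiased estimates; the scalar sketch below illustrates the principle with invented numbers and does not reproduce the paper's filter design.

```python
import numpy as np

def fuse(theta_a, var_a, theta_b, var_b):
    """Inverse-variance weighted fusion of two unbiased estimates of the
    same angle; the result has lower variance than either input."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    theta = (w_a * theta_a + w_b * theta_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return theta, var

# e.g., a noisier inertial estimate and a lower-variance vision estimate
print(fuse(10.0, 4.0, 12.0, 1.0))  # -> (11.6, 0.8): pulled toward the better sensor
```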
Quaternion-valued single-phase model for three-phase power system
NASA Astrophysics Data System (ADS)
Gou, Xiaoming; Liu, Zhiwen; Liu, Wei; Xu, Yougen; Wang, Jiabin
2018-03-01
In this work, a quaternion-valued model is proposed in lieu of Clarke's αβ transformation to convert three-phase quantities to a hypercomplex single-phase signal. The concatenated signal can be used for harmonic distortion detection in three-phase power systems. In particular, the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain, whereas Clarke's transformation-based methods fail to detect the zero-sequence voltages. Based on the quaternion-valued model, the Fourier transform, the minimum variance distortionless response (MVDR) algorithm, and the multiple signal classification (MUSIC) algorithm are presented as examples to detect harmonic distortion. Simulations are provided to demonstrate the potential of this new modeling method.
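The sketch below illustrates the MVDR step on an ordinary complex-valued signal; the paper's quaternion-valued version follows the same recipe with quaternion algebra in place of complex arithmetic. Frequencies, amplitudes, and noise level are invented for the example:

```python
# MVDR pseudo-spectrum 1 / (a^H R^-1 a) for harmonic detection, complex-valued
# stand-in for the quaternion-domain version described in the abstract.
import numpy as np

fs, N, M = 1000.0, 2000, 32                  # sample rate, samples, snapshot length
t = np.arange(N) / fs
x = (np.exp(2j * np.pi * 50 * t)             # fundamental (50 Hz)
     + 0.3 * np.exp(2j * np.pi * 150 * t)    # 3rd harmonic
     + 0.1 * (np.random.randn(N) + 1j * np.random.randn(N)))

# Sample covariance from sliding snapshots, with diagonal loading for stability.
snaps = np.array([x[i:i + M] for i in range(N - M)])
R = snaps.conj().T @ snaps / len(snaps) + 1e-3 * np.eye(M)
Rinv = np.linalg.inv(R)

freqs = np.linspace(0, 300, 601)
a = np.exp(2j * np.pi * np.outer(freqs / fs, np.arange(M)))   # steering vectors
p_mvdr = 1.0 / np.real(np.einsum("fi,ij,fj->f", a.conj(), Rinv, a))
print("strongest peak near", freqs[np.argmax(p_mvdr)], "Hz")
```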
Object aggregation using Neyman-Pearson analysis
NASA Astrophysics Data System (ADS)
Bai, Li; Hinman, Michael L.
2003-04-01
This paper presents a novel approach to 1) distinguishing military vehicle groups and 2) identifying the names of military vehicle convoys in the level-2 fusion process. The data are generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. These data are processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis-testing techniques based on the Neyman-Pearson (NP) criterion. One characteristic of NP is the low error probability when a priori information is unknown. The NP approach was demonstrated with this advantage over a Bayesian technique.
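A minimal sketch of the Neyman-Pearson idea: fix the false-alarm rate and set the likelihood-ratio threshold accordingly, with no priors required (the advantage over a Bayesian rule noted above). The Gaussian means and variance are illustrative stand-ins for MTDV-style statistics:

```python
# Neyman-Pearson detection at a fixed false-alarm rate alpha.
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma, alpha = 0.0, 2.0, 1.0, 0.05   # H0/H1 means, noise sd, PFA
# For equal-variance Gaussians the likelihood-ratio test reduces to a
# threshold on x itself; choose it so P(x > thresh | H0) = alpha.
thresh = norm.ppf(1 - alpha, loc=mu0, scale=sigma)

rng = np.random.default_rng(0)
x0 = rng.normal(mu0, sigma, 100000)            # data under H0
x1 = rng.normal(mu1, sigma, 100000)            # data under H1
print(f"empirical PFA={np.mean(x0 > thresh):.3f}  PD={np.mean(x1 > thresh):.3f}")
```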
Standing wave contributions to the linear interference effect in stratosphere-troposphere coupling
NASA Astrophysics Data System (ADS)
Watt-Meyer, Oliver; Kushner, Paul
2014-05-01
A body of literature by Hayashi and others [Hayashi 1973, 1977, 1979; Pratt, 1976] developed a decomposition of the wavenumber-frequency spectrum into standing and travelling waves. These techniques directly decompose the power spectrum—that is, the amplitudes squared—into standing and travelling parts, which incorrectly leaves no term representing the covariance between these waves. We propose a simple decomposition based on the 2D Fourier transform which allows one to directly compute the variance of the standing and travelling waves, as well as the covariance between them. Applying this decomposition to geopotential height anomalies in the Northern Hemisphere winter, we show the dominance of standing waves for planetary wavenumbers 1 through 3, especially in the stratosphere, and that wave-1 anomalies have a significant westward travelling component in the high-latitude (60°N to 80°N) troposphere. Variations in the relative zonal phasing between a wave anomaly and the background climatological wave pattern—the "linear interference" effect—are known to explain a large part of the planetary wave driving of the polar stratosphere in both hemispheres. While the linear interference effect is robust across observations, models of varying degrees of complexity, and in response to various types of perturbations, it is not well understood dynamically. We use the above-described decomposition into standing and travelling waves to investigate the drivers of linear interference. We find that the linear part of the wave activity flux is primarily driven by the standing waves, at all vertical levels. This can be understood by noting that the longitudinal positions of the antinodes of the standing waves are typically close to being aligned with the maximum and minimum of the background climatology. We discuss implications for predictability of wave activity flux, and hence polar vortex strength variability.
Ion Bernstein instability as a possible source for oxygen ion cyclotron harmonic waves
NASA Astrophysics Data System (ADS)
Min, Kyungguk; Denton, Richard E.; Liu, Kaijun; Gary, S. Peter; Spence, Harlan E.
2017-05-01
This paper demonstrates that an ion Bernstein instability can be a possible source for recently reported electromagnetic waves with frequencies at or near the singly ionized oxygen ion cyclotron frequency, ΩO+, and its harmonics. The particle measurements during strong wave activity revealed a relatively high concentration of oxygen ions (∼15%) whose phase space density exhibits a local peak at energy ∼20 keV. Given that the electron plasma-to-cyclotron frequency ratio is ωpe/Ωe ≳ 1, this energy corresponds to the particle speed v/vA ≳ 0.3, where vA is the oxygen Alfvén speed. Using the observational key plasma parameters, a simplified ion velocity distribution is constructed, where the local peak in the oxygen ion velocity distribution is represented by an isotropic shell distribution. Kinetic linear dispersion theory then predicts unstable Bernstein modes at or near the harmonics of ΩO+ and at propagation quasi-perpendicular to the background magnetic field, B0. If the cold ions are mostly protons, these unstable modes are characterized by a low compressibility (|δB∥|²/|δB|² ≲ 0.01), a small phase speed (vph ∼ 0.2vA), a relatively small ratio of the electric field energy to the magnetic field energy (between 10⁻⁴ and 10⁻³), and the Poynting vector directed almost parallel to B0. These linear properties are overall in good agreement with the properties of the observed waves. We demonstrate that superposition of the predicted unstable Bernstein modes at quasi-perpendicular propagation can produce the observed polarization properties, including the minimum variance direction on average almost parallel to B0.
25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?
Code of Federal Regulations, 2013 CFR
2013-04-01
... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...
25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?
Code of Federal Regulations, 2012 CFR
2012-04-01
... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...
25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?
Code of Federal Regulations, 2011 CFR
2011-04-01
... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...
25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?
Code of Federal Regulations, 2014 CFR
2014-04-01
... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...
25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?
Code of Federal Regulations, 2010 CFR
2010-04-01
... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-02
... 17,000-lb center wing tank (CWT) minimum fuel amount to select the CWT override/jettison pumps ON... the Boeing comment for the reasons provided and because the certification limitation for CWT minimum... prior FAA approvals. The note specified the following: ``The CWT and the HST may be emptied normally...
7 CFR 4280.161 - Direct Loan Process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE LOANS AND GRANTS Renewable Energy Systems and Energy... available for direct loans; (2) Applicant and project eligibility criteria; (3) Minimum and maximum loan...; (11) Construction planning and performing development; (12) Requirements after project construction...
Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren
2014-10-20
This report investigates, for the first time, cell number as a potential source of inter-treatment bias in gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This can be accomplished using an appropriate correction method that detects and removes the inter-treatment bias in cell number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. We therefore recommend inspecting both bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.
Refining a case-mix measure for nursing homes: Resource Utilization Groups (RUG-III).
Fries, B E; Schneider, D P; Foley, W J; Gavazzi, M; Burke, R; Cornelius, E
1994-07-01
A case-mix classification system for nursing home residents is developed, based on a sample of 7,658 residents in seven states. Data included a broad assessment of resident characteristics, corresponding to items of the Minimum Data Set, and detailed measurement of nursing staff care time over a 24-hour period and therapy staff time over a 1-week period. The Resource Utilization Groups, Version III (RUG-III) system, with 44 distinct groups, achieves 55.5% variance explanation of total (nursing and therapy) per diem cost and meets goals of clinical validity and payment incentives. The mean resource use (case-mix index) of the groups spans a nine-fold range. The RUG-III system improves on an earlier version not only by increasing the variance explanation (from 43%), but, more importantly, by identifying residents with "high-tech" procedures (e.g., ventilators, respirators, and parenteral feeding) and those with cognitive impairments; by making better use of multiple activities-of-daily-living measures; and by providing explicit qualifications for the Medicare nursing home benefit. RUG-III is being implemented for nursing home payment in 11 states (six as part of a federal multistate demonstration) and can be used in management, staffing-level determination, and quality assurance.
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both the mean reward under the current estimate of the optimal TR and that under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
The economic burden of meningitis to households in Kassena-Nankana district of Northern Ghana.
Akweongo, Patricia; Dalaba, Maxwell A; Hayden, Mary H; Awine, Timothy; Nyaaba, Gertrude N; Anaseba, Dominic; Hodgson, Abraham; Forgor, Abdulai A; Pandya, Rajul
2013-01-01
To estimate the direct and indirect costs of meningitis to households in the Kassena-Nankana District of Ghana, a cost-of-illness (COI) survey was conducted between 2010 and 2011. The COI was computed from a retrospective review of 80 meningitis cases' answers to questions about direct medical costs, direct non-medical costs incurred, and productivity losses due to a recent meningitis incident. The average direct and indirect cost of treating meningitis in the district was GH¢152.55 (US$101.70) per household. This is equivalent to about two months' minimum wage earned by Ghanaians in unskilled paid jobs in 2009. Households lost 29 days of work per meningitis case, and thus those in minimum-wage paid jobs lost a monthly minimum wage of GH¢76.85 (US$51.23) due to the illness. Patients who were insured spent an average of GH¢38.50 (US$25.67) in direct medical costs, while the uninsured spent as much as GH¢177.90 (US$118.60) per case. Patients with sequelae incurred additional costs of GH¢22.63 (US$15.08) per case. The least poor were more exposed to meningitis than the poorest. Meningitis is a debilitating but preventable disease that affects people living in the Sahel and in poorer conditions. The cost of meningitis treatment may further impoverish these households. Widespread mass vaccination would save households the equivalent of GH¢175.18 (US$117) and avert impairment due to meningitis.
ERIC Educational Resources Information Center
Wilkes, Sam T.; Blackbourn, Joe M.
This project attempts to refine the Zones of Indifference Instrument (included in the appendix), which measures zones of indifference of teachers to typical directives issued by administrators. As a result of the original validation study, a 78-item, two-factor instrument was developed. These two factors explained 52 percent of the variance. The…
Variance components for direct and maternal effects on body weights of Katahdin lambs
USDA-ARS?s Scientific Manuscript database
The aim of this study was to estimate genetic parameters for BW in Katahdin lambs. Six animal models were used to study direct and maternal effects on birth (BWT), weaning (WWT) and postweaning (PWWT) weights using 41,066 BWT, 33,980 WWT, and 22,793 PWWT records collected over 17 yr in 100 flocks. F...
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.; Henson, Robert
2012-01-01
A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…
NASA Astrophysics Data System (ADS)
Bian, Zunjian; Du, Yongming; Li, Hua
2016-04-01
Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, but fewer efforts have focused on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combined 3-D model of TRGM (Thermal-region Radiosity-Graphics combined Model) and an energy balance method is proposed in this paper in an attempt to simultaneously simulate component temperatures and DBT in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by an iteration strategy between TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although we find that the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric
2017-10-28
In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states' accuracies become desirable. As a direct measure of a wave function's accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.
The Principle of Energetic Consistency
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
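The identity behind this principle can be checked numerically in a few lines. The sketch below, using a synthetic ensemble, verifies that when the energy is quadratic in the natural energy variables, the ensemble-mean total energy equals the energy of the ensemble mean plus half the total variance (the trace of the covariance matrix):

```python
# Numerical check of E[|x|^2]/2 = |mean|^2/2 + tr(cov)/2 for a synthetic
# ensemble; this is the moment identity underlying energetic consistency.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens = 50, 2000
ens = rng.normal(0.0, 1.0, (n_ens, n_state)) + np.sin(np.arange(n_state))

mean = ens.mean(axis=0)
cov = np.cov(ens, rowvar=False, bias=True)           # population covariance
lhs = 0.5 * np.mean(np.sum(ens**2, axis=1))          # mean total energy
rhs = 0.5 * mean @ mean + 0.5 * np.trace(cov)        # mean energy + total variance
print(f"{lhs:.6f} == {rhs:.6f}")
```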
On methods of estimating cosmological bulk flows
NASA Astrophysics Data System (ADS)
Nusser, Adi
2016-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for either of these definitions, which coincide only for the case of a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
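As a concrete reference point, the sketch below implements the standard Maximum Likelihood bulk flow estimator from radial peculiar velocities on a mock catalogue (the limit that, per the abstract, CMV reproduces for vanishing prior correlations or very noisy data). The catalogue and error model are invented for the example:

```python
# ML bulk flow: B = (sum r r^T / s^2)^-1 sum r u / s^2, with r unit
# line-of-sight vectors, u radial velocities, s per-galaxy errors.
import numpy as np

rng = np.random.default_rng(2)
n = 500
r = rng.normal(size=(n, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)      # random sky directions
B_true = np.array([300.0, -100.0, 50.0])           # km/s
sig = np.full(n, 250.0)                            # velocity errors (km/s)
u = r @ B_true + rng.normal(0.0, sig)              # observed radial velocities

A = (r[:, :, None] * r[:, None, :] / sig[:, None, None]**2).sum(axis=0)
b = (r * (u / sig**2)[:, None]).sum(axis=0)
B_hat = np.linalg.solve(A, b)
print("recovered bulk flow (km/s):", np.round(B_hat, 1))
```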
Statistical independence of the initial conditions in chaotic mixing.
García de la Cruz, J M; Vassilicos, J C; Rossi, L
2017-11-01
Experimental evidence of the scalar convergence towards a global strange eigenmode independent of the scalar initial condition in chaotic mixing is provided. This convergence, underpinning the independent nature of chaotic mixing in any passive scalar, is demonstrated by scalar fields with different initial conditions casting statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching, as described by the Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay then becomes exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar using a set of laminar flows with unsteady periodic topology. These flows, which resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free-surface 2D mixing experiment. This forcing generates a velocity field which periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing and an external extensional area. These two regions are interconnected through two pairs of fluid conduits which transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.
New polymorphs of 9-nitro-camptothecin prepared using a supercritical anti-solvent process.
Huang, Yinxia; Wang, Hongdi; Liu, Guijin; Jiang, Yanbin
2015-12-30
Recrystallization and micronization of 9-nitro-camptothecin (9-NC) has been investigated using the supercritical anti-solvent (SAS) technology in this study. Five operating factors, i.e., the type of organic solvent, the concentration of 9-NC in the solution, the flow rate of the 9-NC solution, the precipitation pressure and the temperature, were optimized using a selected OA16(4^5) orthogonal array design, and a series of characterizations were performed for all samples. The results showed that the processed 9-NC particles exhibited smaller particle size and narrower particle size distribution compared with the 9-NC raw material (Form I), and the optimum micronization conditions for preparing 9-NC with minimum particle size were determined by variance analysis, in which the solvent plays the most important role in the formation and transformation of polymorphs. Three new polymorphic forms (Forms II, III and IV) of 9-NC, which present different physicochemical properties, were generated after the SAS process. The structures of the 9-NC crystals, which were consistent with the experiments, were predicted from the experimental XRD data by the direct space approach using the Reflex module of Materials Studio. Meanwhile, the optimal sample (Form III) was proved to have higher cytotoxicity against the cancer cells, suggesting that the therapeutic efficacy of 9-NC is polymorph-dependent. Copyright © 2015 Elsevier B.V. All rights reserved.
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2009-06-02
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2002-01-01
In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2010-08-24
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
Fine-scale variability of isopycnal salinity in the California Current System
NASA Astrophysics Data System (ADS)
Itoh, Sachihiko; Rudnick, Daniel L.
2017-09-01
This paper examines the fine-scale structure and seasonal fluctuations of the isopycnal salinity of the California Current System from 2007 to 2013 using temperature and salinity profiles obtained from a series of underwater glider surveys. The seasonal mean distributions of the spectral power of the isopycnal salinity gradient averaged over submesoscale (12-30 km) and mesoscale (30-60 km) ranges along three survey lines off Monterey Bay, Point Conception, and Dana Point were obtained from 298 transects. The mesoscale and submesoscale variance increased as coastal upwelling caused the isopycnal salinity gradient to steepen. Areas of elevated variance were clearly observed around the salinity front during the summer, then spread offshore through the fall and winter. The high fine-scale variances were typically observed above 25.8 kg m⁻³ and decreased with depth to a minimum at around 26.3 kg m⁻³. The mean spectral slope of the isopycnal salinity gradient with respect to wavenumber was 0.19 ± 0.27 over the horizontal scale of 12-60 km, and 31%-35% of the spectra had significantly positive slopes. In contrast, the spectral slope over 12-30 km was mostly flat, with mean values of -0.025 ± 0.32. An increase in submesoscale variability accompanying the steepening of the spectral slope was often observed in inshore areas, e.g., off Monterey Bay in winter, where a sharp front developed between the California Current and the California Undercurrent, and in the lower layers of the Southern California Bight, where vigorous interaction between a synoptic current and bottom topography is to be expected.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation: a procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the areas of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
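A minimal numerical sketch of the weighted least squares step, with a synthetic heteroscedastic dataset in place of the paddy data and weights taken as inverse error variances:

```python
# WLS for a polynomial model: beta = (X^T W X)^-1 X^T W y.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
sigma = 0.2 + 0.15 * x                      # heteroscedastic: spread grows with x
y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0.0, sigma)

X = np.column_stack([np.ones_like(x), x, x**2])   # quadratic design matrix
W = np.diag(1.0 / sigma**2)                        # inverse-variance weights

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)       # ignores heteroscedasticity
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("OLS:", np.round(beta_ols, 3), " WLS:", np.round(beta_wls, 3))
```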
Security practices and regulatory compliance in the healthcare industry.
Kwon, Juhee; Johnson, M Eric
2013-01-01
Securing protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance. We employed Ward's cluster analysis using minimum variance based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance. We utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security. Our analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters have significant differences among non-technical practices rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices). Hospitals in the highest level of compliance were significantly managing third parties' breaches and training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption. PMID:22955497
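A minimal sketch of the clustering step, assuming a synthetic 0/1 adoption matrix in place of the survey data: Ward's minimum-variance hierarchical clustering cut into three groups (the leaders/followers/laggers structure):

```python
# Ward's minimum-variance clustering on dichotomous adoption data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# 250 organizations x 20 practices; three latent adoption profiles.
profiles = rng.random((3, 20)) < np.array([[0.8], [0.5], [0.2]])
org_type = rng.integers(0, 3, 250)
adoption = (rng.random((250, 20)) < np.where(profiles[org_type], 0.9, 0.1)).astype(int)

Z = linkage(adoption, method="ward")       # Ward linkage on Euclidean distances
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```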
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
NASA Astrophysics Data System (ADS)
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse-interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1-minute dataset, precision decreased by 23%, 46% and 60% for the 5-, 10- and 15-minute datasets, respectively. Five- and 10-minute sampling intervals provided unbiased, equal-variance estimates of 1-minute sampling, whereas 15-minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating the average volume per log. [Figure caption: Comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled at n = 100, 10,000 times, and all trials were averaged to obtain an estimate for each sample interval. Dashed lines represent values from the one-minute dataset.]
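A minimal sketch of the equal-n bootstrap comparison described in the closing caption, with synthetic wood counts standing in for the Slave River data:

```python
# Equal-n bootstrap comparison of precision across sampling intervals.
import numpy as np

rng = np.random.default_rng(5)
datasets = {m: rng.poisson(3.0 * m, size=2000 // m)   # counts per m-minute window
            for m in (1, 5, 10, 15)}

def boot_mean_rate(counts, minutes, n=100, trials=10000):
    """Bootstrap the per-minute wood rate at a common subsample size n."""
    draws = rng.choice(counts, size=(trials, n), replace=True)
    rates = draws.mean(axis=1) / minutes              # pieces of wood per minute
    return rates.mean(), rates.std()

for m, c in datasets.items():
    mu, sd = boot_mean_rate(c, m)
    print(f"{m:2d}-min sampling: rate={mu:.3f}/min  bootstrap sd={sd:.4f}")
```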
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
Adaptive color halftoning for minimum perceived error using the blue noise mask
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1997-04-01
Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two different color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
NASA Astrophysics Data System (ADS)
He, Minhui; Yang, Bao; Datsenko, Nina M.
2014-08-01
The recent unprecedented warming found in different regions has aroused much attention in past years. How temperature has really changed on the Tibetan Plateau (TP) remains unknown, since very few high-resolution temperature series exist for this region, where large areas of snow and ice are found. Herein, we develop two Juniperus tibetica Kom. tree-ring width chronologies from different elevations. We found that the two tree-ring series share only high-frequency variability. Correlation, response function and partial correlation analyses indicate that prior-year annual (January-December) minimum temperature is most responsible for juniper radial growth at the higher belt, while the tree-ring width chronology at the lower belt contains some precipitation signal and is thus excluded from further analysis. The tree growth-climate model accounted for 40% of the total variance in actual temperature during the common period 1957-2010. The detected temperature signal is further robustly verified by other results. Consequently, a six-century-long annual minimum temperature history was recovered for the first time for the Yushu region, central TP. Interestingly, the rapid warming trend during the past five decades is identified as a significant cold phase in the context of the past 600 years. The recovered temperature series reflects low-frequency variability consistent with other temperature reconstructions over the whole TP region. Furthermore, the present recovered temperature series is associated with the Asian monsoon strength on decadal to multidecadal scales over the past 600 years.
Hospital and Community Characteristics Associated With Pediatric Direct Admission to Hospital.
Leyenaar, JoAnna K; Shieh, Meng-Shiou; Lagu, Tara; Pekow, Penelope S; Lindenauer, Peter K
2017-10-27
One quarter of pediatric hospitalizations begin as direct admissions, defined as hospitalization without receiving care in the hospital's emergency department (ED). Direct admission rates are highly variable across hospitals, yet previous studies have not examined reasons for this variation. We aimed to determine the relationships between hospital and community factors and pediatric direct admission rates, and to evaluate the degree to which these characteristics explain variation in risk-adjusted direct admission rates. We conducted a cross-sectional study of the Healthcare Cost and Utilization Project's Kids Inpatient Database, the American Hospital Association Database, and the Area Health Resource File, including children <18 years of age who were admitted for a medical hospitalization in states contributing data to all data sets. Using hierarchical generalized linear modeling, we generated risk-adjusted direct admission rates and used generalized linear models to assess the association of hospital and community characteristics with these risk-adjusted rates. We included 211,458 children discharged from 933 hospitals in 26 states; 20.2% were admitted directly. One-fifth of the variance in risk-adjusted direct admission rates was attributed to observed hospital and community factors. The greatest proportion of this explained variance was related to ED volume (37%), volume of pediatric hospitalizations (27%), and size of the pediatrician workforce (12%). Direct admission rates were associated with several hospital and community characteristics, but the majority of variation in hospitals' direct admission rates was not explained by these factors. These findings suggest opportunities for diverse hospital types to develop the infrastructure and communication systems necessary to support pediatric direct admissions. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Multiscale analysis of the CMB temperature derivatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es
2017-02-01
We study the Planck CMB temperature at different scales through its derivatives up to second order, which allows one to characterize the local shape and isotropy of the field. The problem of having an incomplete sky in the calculation and statistical characterization of the derivatives is addressed in the paper. The analysis confirms the existence of a low variance in the CMB at large scales, which is also noticeable in the derivatives. Moreover, deviations from the standard model in the gradient, curvature and the eccentricity tensor are studied in terms of extreme values in the data. As expected, the Cold Spot is detected as one of the most prominent peaks in terms of curvature, but additionally, when the information of the temperature and its Laplacian are combined, another feature with similar probability at the scale of 10° is also observed. However, the p-values of these two deviations increase above 6% when they are referred to the variance calculated from the theoretical fiducial model, indicating that these deviations can be associated with the low variance anomaly. Finally, an estimator of the directional anisotropy for spinorial quantities is introduced, which is applied to the spinors derived from the field derivatives. An anisotropic direction whose probability is <1% is detected in the eccentricity tensor.
Uncertainty Propagation for Terrestrial Mobile Laser Scanner
NASA Astrophysics Data System (ADS)
Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas
2016-06-01
Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows one to compute a covariance matrix per 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner relative to the vehicle, and the direct georeferencing system. We suppose that all the uncertainties follow Gaussian distributions. The variances of the laser scanner measurements (two angles and one distance) are usually evaluated by the manufacturers. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle system. Knowing the variances of all sources of uncertainty, we applied the uncertainty propagation technique to compute the variance-covariance matrix of every obtained 3D point. Such an uncertainty analysis enables one to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
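A minimal sketch of the propagation step for a single laser return, assuming illustrative sensor standard deviations and ignoring the calibration and georeferencing terms that the full pipeline adds:

```python
# First-order propagation of (range, horizontal angle, vertical angle) errors
# to Cartesian coordinates: C_xyz = J C_meas J^T, J the mapping's Jacobian.
import numpy as np

r, theta, phi = 25.0, np.radians(30.0), np.radians(5.0)   # one measurement
C_meas = np.diag([0.02**2, np.radians(0.01)**2, np.radians(0.01)**2])

ct, st, cp, sp = np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)
# x = r cos(phi) cos(theta), y = r cos(phi) sin(theta), z = r sin(phi)
J = np.array([[cp * ct, -r * cp * st, -r * sp * ct],
              [cp * st,  r * cp * ct, -r * sp * st],
              [sp,       0.0,          r * cp     ]])

C_xyz = J @ C_meas @ J.T
print("3D point sigmas (m):", np.round(np.sqrt(np.diag(C_xyz)), 4))
```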
Analysis and interpretation of satellite fragmentation data
NASA Technical Reports Server (NTRS)
Tan, Arjun
1987-01-01
The velocity perturbations of the fragments of a satellite can yield valuable information regarding the nature and intensity of the fragmentation. A feasibility study on calculating the velocity perturbations from existing equations was carried out by analyzing 23 major documented fragmentation events. It was found that whereas the calculated values of the radial components of the velocity change were often unusually high, those in the two other orthogonal directions were mostly reasonable. Since the uncertainties in the radial component necessarily translate into uncertainties in the total velocity change, it is suggested that alternative expressions for the radial component of velocity be sought for the purpose of determining the cause of the fragmentation from the total velocity change. The calculated variances in the velocity perturbations in the two directions orthogonal to the radial vector are smallest for collision-induced breakups and largest for low-intensity explosion-induced breakups. The corresponding variances for high-intensity explosion-induced breakups generally have values intermediate between those of the two extreme categories. A three-dimensional plot of the variances in the two orthogonal velocity perturbations and the plane change angle shows a clear separation between the three major types of breakups. This information is used to reclassify a number of satellite fragmentation events of unknown category.
The Negative Impact of Organizational Cynicism on Physicians and Nurses
Volpe, Rebecca L.; Mohammed, Susan; Hopkins, Margaret; Shapiro, Daniel; Dellasega, Cheryl
2015-01-01
Despite the potentially severe consequences that could result, there is a paucity of research on organizational cynicism within US healthcare providers. In response, this study investigated the effect of cynicism on organizational commitment, job satisfaction, and interest in leaving the hospital for another job in a sample of 205 physicians and 842 nurses. Three types of cynicism were investigated: trait (dispositional), global (directed toward the hospital), and local (directed toward a specific unit or department). Findings indicate that all three types of cynicism were negatively related to affective organizational commitment and job satisfaction, but positively related to interest in leaving. In both nurse and physician samples, cynicism explained about half of the variance in job satisfaction and affective commitment, which is the type of commitment managers are most eager to promote. Cynicism accounted for about a quarter and a third of the variance in interest in leaving the hospital for nurses and physicians, respectively. Trait, global and local cynicism each accounted for unique variance in affective commitment, satisfaction, and interest in leaving, with global cynicism exerting the largest influence on each outcome. The implications for managers are that activities aimed at decreasing organizational cynicism are likely to increase affective organizational commitment, job satisfaction, and organizational tenure. PMID:25350015
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic one, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
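A minimal sketch of the idea, recast as a control variate under invented functions: the cheap piecewise-linear surrogate absorbs most of the variance, so only a few expensive evaluations are needed to correct its mean:

```python
# Variance reduction via a cheap piecewise-linear surrogate of an expensive
# recourse function: estimate E[g] cheaply, correct with few samples of f - g.
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.maximum(0.0, x - 1.0) ** 1.5          # "expensive" recourse
g = lambda x: 0.6 * np.maximum(0.0, x - 1.0)           # piecewise-linear surrogate

x_cheap = rng.normal(0.0, 1.0, 1_000_000)              # cheap: evaluate g only
Eg = g(x_cheap).mean()

x_dear = rng.normal(0.0, 1.0, 2_000)                   # few expensive evaluations
est_cv = Eg + (f(x_dear) - g(x_dear)).mean()
est_naive = f(x_dear).mean()
print(f"control-variate: {est_cv:.4f}  naive (same n): {est_naive:.4f}")
# The spread of f - g is much smaller than that of f, hence the variance gain.
print(f"sd(f)={f(x_dear).std():.4f}  sd(f-g)={(f(x_dear) - g(x_dear)).std():.4f}")
```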
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This results in an m-dimensional confidence region of material quantities, e.g., thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For a given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials were generated, and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations; in the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
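As an illustration of the decomposition step, the sketch below recovers two material thicknesses from counts in two energy bins by minimizing a z-score-like (chi-square) misfit under Beer-Lambert attenuation; the attenuation coefficients, incident counts, and thicknesses are invented for the example and are not the paper's values:

```python
# Two-material, two-bin decomposition by minimizing a z-score-like misfit.
import numpy as np
from scipy.optimize import minimize

mu = np.array([[2.0, 0.8],      # rows: energy bins; cols: [iodine, water] (1/cm)
               [0.9, 0.5]])
N0 = np.array([1e5, 1e5])       # incident counts per bin
t_true = np.array([0.05, 2.0])  # thicknesses (cm)

rng = np.random.default_rng(8)
counts = rng.poisson(N0 * np.exp(-mu @ t_true))        # measured counts

def zscore2(t):
    # Sum of squared z-scores, using the Poisson variance ~ expected counts.
    expected = N0 * np.exp(-mu @ t)
    return np.sum((counts - expected) ** 2 / expected)

res = minimize(zscore2, x0=np.array([0.1, 1.0]), method="Nelder-Mead")
print("recovered thicknesses (cm):", np.round(res.x, 4))
```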
Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E
2010-07-01
Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by the type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both half-duplex and full-duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of the 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. The mean read distances across all 30 combinations ranged from 45.1 to 129.4 cm. Transceiver manufacturer and transponder type also interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.
NASA Technical Reports Server (NTRS)
Stanley, William D.
1994-01-01
An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The analysis relies on discrete-time processing, and the study focused on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transform (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
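For reference, the non-overlapping Allan variance of a uniformly sampled record takes only a few lines, and its log-log slope against averaging time is what separates the three components. A minimal sketch, not the programs used in the study:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of uniformly sampled data x
    at averaging factor m (averaging time = m * sample period)."""
    n = len(x) // m
    if n < 2:
        raise ValueError("averaging factor too large for record length")
    block_means = x[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

# Slopes of log(Allan variance) vs log(tau) distinguish the noise types:
# white noise falls as tau**-1, flicker noise plateaus, random walk rises as tau.
rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 16)
print([allan_variance(white, 2 ** k) for k in range(1, 8)])
```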
Long term pavement performance directive : annual profiler-dipstick comparisons
DOT National Transportation Integrated Search
1996-11-25
The objective of this directive is to initiate a formal program for Profiler - Dipstick comparisons. These comparison tests should be performed as a minimum, on an annual basis, or within 90 days after major repairs to any of the LTPP profile measure...
Multimode laser beam analyzer instrument using electrically programmable optics.
Marraccini, Philip J; Riza, Nabeel A
2011-12-01
Presented is a novel design of a multimode laser beam analyzer using a digital micromirror device (DMD) and an electronically controlled variable focus lens (ECVFL) that serve as the digital and analog agile optics, respectively. The proposed analyzer is a broadband laser characterization instrument that uses the agile optics to smartly direct light to the required point photodetectors to enable beam measurements of minimum beam waist size, minimum waist location, divergence, and the beam propagation parameter M(2). Experimental results successfully demonstrate these measurements for a 500 mW multimode test laser beam with a wavelength of 532 nm. The minimum beam waist, divergence, and M(2) experimental results for the test laser are found to be 257.61 μm, 2.103 mrad, 1.600 and 326.67 μm, 2.682 mrad, 2.587 for the vertical and horizontal directions, respectively. These measurements are compared to a traditional scan method, and the results for the beam waist are found to be within the error tolerance of the demonstrated instrument.
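The reported figures are mutually consistent with the standard Gaussian-beam relation M² = πw₀Θ/(2λ), assuming w₀ is the reported waist (radius) and Θ the full divergence angle; a quick check:

```python
import numpy as np

lam = 532e-9  # test laser wavelength in meters

def m_squared(w0, theta_full):
    # M^2 = pi * w0 * (theta_full / 2) / lambda, with w0 the waist radius
    return np.pi * w0 * theta_full / (2 * lam)

print(m_squared(257.61e-6, 2.103e-3))  # vertical:   ~1.600
print(m_squared(326.67e-6, 2.682e-3))  # horizontal: ~2.587
```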
A Fuel-Efficient Conflict Resolution Maneuver for Separation Assurance
NASA Technical Reports Server (NTRS)
Bowe, Aisha Ruth; Santiago, Confesor
2012-01-01
Automated separation assurance algorithms are envisioned to play an integral role in accommodating the forecasted increase in demand of the National Airspace System. Developing a robust, reliable, air traffic management system involves safely increasing efficiency and throughput while considering the potential impact on users. This experiment seeks to evaluate the benefit of augmenting a conflict detection and resolution algorithm to consider a fuel efficient, Zero-Delay Direct-To maneuver, when resolving a given conflict based on either minimum fuel burn or minimum delay. A total of twelve conditions were tested in a fast-time simulation conducted in three airspace regions with mixed aircraft types and light weather. Results show that inclusion of this maneuver has no appreciable effect on the ability of the algorithm to safely detect and resolve conflicts. The results further suggest that enabling the Zero-Delay Direct-To maneuver significantly increases the cumulative fuel burn savings when choosing resolution based on minimum fuel burn while marginally increasing the average delay per resolution.
New presentation method for magnetic resonance angiography images based on skeletonization
NASA Astrophysics Data System (ADS)
Nystroem, Ingela; Smedby, Orjan
2000-04-01
Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independent of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.
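A rough sketch of the pipeline on an already-segmented volume, assuming a recent scikit-image whose `skeletonize` handles 3D arrays; this generic routine stands in for the paper's own surface- and curve-skeletonization steps:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def radius_labeled_skeleton(vessel_mask):
    """Label a curve skeleton with local vessel radii, given a binary
    3D vessel mask with cubic voxels (illustrative stand-in pipeline)."""
    # Euclidean distance from each object voxel to the background:
    # at a skeletal voxel this is half the minimum vessel diameter there.
    dist = ndimage.distance_transform_edt(vessel_mask)
    skeleton = skeletonize(vessel_mask)
    return np.where(skeleton, dist, 0.0)
```

The reverse distance transformation then re-inflates the vessel by placing a ball of the stored radius at each skeletal voxel, after which any projection method can render the result.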
Weaver, Sallie J; Mossburg, Sarah E; Pillari, MarieSarah; Kent, Paula S; Daugherty Biddison, Elizabeth Lee
This study explored similarities and differences in the views on team membership and leadership held by nurses in formal unit leadership positions and direct care nurses. We used a mixed-methods approach and a maximum variance sampling strategy, sampling from units with both high and low safety behaviors and safety culture scores. We identified several key differences in mental models of care team membership and leadership between formal leaders and direct care nurses that warrant further exploration.
Eaves, Lindon J; Silberg, Judy L
2005-02-01
Several studies report apparent sibling contrast effects in analyses of twin resemblance. In the presence of genetic differences, contrast effects reduce the dizygotic (DZ) twin correlation relative to that in monozygotic (MZ) twins and produce higher DZ than MZ variance. Explanations of contrast effects are typically cast in terms of direct social interaction between twins or as an artifact of the process of parents rating their children. We outline a model for sibling imitation and contrast effects that depends on social interaction between parents and children. In addition to predicting the observed pattern of twin variances and covariances, the parental mediation of child imitation and contrast effects leads to differences in the variance of parents of MZ and DZ twins and differences between the correlations of parents with their MZ and DZ children.
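The basic contrast signature (higher DZ than MZ variance, and a disproportionately reduced DZ correlation) can be reproduced with a toy direct-interaction model solved in closed form; this only illustrates the pattern, not the parental-mediation model the abstract proposes, and all parameter values are illustrative:

```python
import numpy as np

def twin_stats(r_genetic, s=-0.15, h2=0.5, n=200_000, seed=0):
    """Reciprocal contrast model P1 = a1 + s*P2, P2 = a2 + s*P1 (s < 0),
    with latent scores a = A + E of unit variance and cov = r_genetic * h2."""
    rng = np.random.default_rng(seed)
    c = r_genetic * h2
    a = rng.multivariate_normal([0, 0], [[1, c], [c, 1]], size=n)
    p1 = (a[:, 0] + s * a[:, 1]) / (1 - s ** 2)   # closed-form solution
    p2 = (a[:, 1] + s * a[:, 0]) / (1 - s ** 2)
    return p1.var(), np.corrcoef(p1, p2)[0, 1]

var_mz, r_mz = twin_stats(1.0)   # MZ pairs share all genetic variance
var_dz, r_dz = twin_stats(0.5)   # DZ pairs share half on average
# Reproduces the stated signature: var_dz > var_mz, and r_dz falls
# well below half of r_mz.
```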
Weighting Mean and Variability during Confidence Judgments
de Gardelle, Vincent; Mamassian, Pascal
2015-01-01
Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using a permutation cipher and a stream cipher, and the encrypted versions are then uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
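On the plaintext side, the kind of feature the scheme relies on can be sketched as per-block variances of pixel differences taken along several directions; the encryption-domain machinery is omitted here, and the block size and L1 comparison are illustrative assumptions rather than the paper's exact mechanism:

```python
import numpy as np

def shifted_diff(img, di, dj):
    # Difference between the image and itself shifted by (di, dj).
    h, w = img.shape
    a = img[max(di, 0):h + min(di, 0), max(dj, 0):w + min(dj, 0)]
    b = img[max(-di, 0):h + min(-di, 0), max(-dj, 0):w + min(-dj, 0)]
    return a - b

def directional_block_variances(img, block=8):
    """Per-block variances of differences along four directions
    (horizontal, vertical, two diagonals), concatenated as a feature."""
    feats = []
    for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        d = shifted_diff(img.astype(float), di, dj)
        h = (d.shape[0] // block) * block
        w = (d.shape[1] // block) * block
        blocks = d[:h, :w].reshape(h // block, block, w // block, block)
        feats.append(blocks.var(axis=(1, 3)).ravel())
    return np.concatenate(feats)

def similarity(f1, f2):
    # Smaller L1 feature distance means more similar images (assumed metric).
    n = min(len(f1), len(f2))
    return -np.abs(f1[:n] - f2[:n]).sum()
```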
Watkins, Marley W
2010-12-01
The structure of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; D. Wechsler, 2003a) was analyzed via confirmatory factor analysis among a national sample of 355 students referred for psychoeducational evaluation by 93 school psychologists from 35 states. The structure of the WISC-IV core battery was best represented by four first-order factors as per D. Wechsler (2003b), plus a general intelligence factor in a direct hierarchical model. The general factor was the predominant source of variation among WISC-IV subtests, accounting for 48% of the total variance and 75% of the common variance. The largest first-order factor, Processing Speed, accounted for only 6.1% of the total and 9.5% of the common variance. Given these explanatory contributions, recommendations favoring interpretation of the first-order factor scores over the general intelligence score appear to be misguided.
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP, deflation reduces variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
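The deflated estimator itself is compact. In the dense toy version below, an exact SVD stands in for the iterative partial SVD (e.g., PRIMME) used at scale: the trace over the deflated singular subspace is computed exactly, and Hutchinson's Rademacher probes estimate only the remainder.

```python
import numpy as np

def deflated_hutchinson(A, k=10, n_samples=100, rng=None):
    """Estimate tr(A^{-1}) with the k smallest singular triplets deflated
    (dense toy version; real applications use iterative solvers instead)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)                 # s is sorted descending
    Uk, sk, Vk = U[:, -k:], s[-k:], Vt[-k:].T   # k smallest triplets
    # Exact contribution of the deflated subspace: tr(U_k^T A^{-1} U_k),
    # using A^{-1} U_k = V_k diag(1/s_k).
    t_deflated = np.trace(Uk.T @ Vk @ np.diag(1.0 / sk))
    # MC average of z^T A^{-1} (I - U_k U_k^T) z with Rademacher probes z.
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        z_defl = z - Uk @ (Uk.T @ z)            # project out deflated space
        est += z @ np.linalg.solve(A, z_defl)
    return t_deflated + est / n_samples
```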
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should be able to locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
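Both estimators are easy to compare on a Gaussian predictive distribution, for which the CRPS has a closed form. A minimal sketch of non-homogeneous Gaussian regression with either criterion; the link functions and coefficient names are illustrative, not the study's exact setup:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def gaussian_crps(mu, sigma, y):
    # Closed-form CRPS of a N(mu, sigma^2) forecast against observation y.
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def fit(ens_mean, ens_sd, y, score="crps"):
    """mu = a + b*ens_mean, log(sigma) = c + d*log(ens_sd);
    minimize mean CRPS or the negative log-likelihood."""
    def loss(p):
        a, b, c, d = p
        mu = a + b * ens_mean
        sigma = np.exp(c + d * np.log(ens_sd))
        if score == "crps":
            return gaussian_crps(mu, sigma, y).mean()
        return -norm.logpdf(y, mu, sigma).mean()   # maximum likelihood
    return minimize(loss, x0=[0.0, 1.0, 0.0, 1.0], method="Nelder-Mead").x
```

With a correctly specified distribution, the two fits should agree up to sampling noise, which is exactly the comparison the study performs.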
Nagarajappa, Ramesh; Batra, Mehak; Sharda, Archana J; Asawa, Kailash; Sanadhya, Sudhanshu; Daryani, Hemasha; Ramesh, Gayathri
2015-01-01
To assess and compare the antimicrobial potential and determine the minimum inhibitory concentration (MIC) of Jasminum grandiflorum and Hibiscus rosa-sinensis extracts as potential anti-pathogenic agents in dental caries. Aqueous and ethanol (cold and hot) extracts prepared from leaves of Jasminum grandiflorum and Hibiscus rosa-sinensis were screened for in vitro antimicrobial activity against Streptococcus mutans and Lactobacillus acidophilus using the agar well diffusion method. The lowest concentration of each extract that inhibited growth, taken as the minimum inhibitory concentration (MIC), was determined for both test organisms. Statistical analysis was performed with one-way analysis of variance (ANOVA). At lower concentrations, hot ethanol Jasminum grandiflorum (10 μg/ml) and Hibiscus rosa-sinensis (25 μg/ml) extracts were found to have statistically significant (P≤0.05) antimicrobial activity against S. mutans and L. acidophilus, with MIC values of 6.25 μg/ml and 25 μg/ml, respectively. A proportional increase in antimicrobial activity (zone of inhibition) with extract concentration was observed. Both extracts were found to be antimicrobially active and to contain compounds with therapeutic potential. Nevertheless, clinical trials on the effects of these plants are essential before advocating large-scale therapy.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve upon a linear equalizer based on minimizing the minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
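The flavor of the criterion can be illustrated with a standard nonpolynomial negentropy approximation, J(e) ≈ (E[G(e)] − E[G(ν)])² with G(u) = log cosh(u) and ν standard normal (a Hyvärinen-style expansion; the paper's exact expansion and normalization may differ):

```python
import numpy as np

def approx_negentropy(e, n_ref=100_000, seed=0):
    """Nonpolynomial negentropy approximation of an error sequence e,
    J(e) ~ (E[G(e)] - E[G(nu)])^2 with G(u) = log cosh(u)."""
    rng = np.random.default_rng(seed)
    e = (e - e.mean()) / e.std()          # negentropy assumes unit variance
    G = lambda u: np.log(np.cosh(u))
    g_ref = G(rng.standard_normal(n_ref)).mean()   # E[G(nu)], nu ~ N(0,1)
    return (G(e).mean() - g_ref) ** 2
```

Driving this quantity toward zero pushes the estimation error toward Gaussianity, which is the sense in which the NEGMIN criterion exploits higher-order statistics that the MMSE criterion ignores.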
NASA Astrophysics Data System (ADS)
Schaperow, J.; Cooper, M. G.; Cooley, S. W.; Alam, S.; Smith, L. C.; Lettenmaier, D. P.
2017-12-01
As climate regimes shift, streamflows and our ability to predict them will change, as well. Elasticity of summer minimum streamflow is estimated for 138 unimpaired headwater river basins across the maritime western US mountains to better understand how climatologic variables and geologic characteristics interact to determine the response of summer low flows to winter precipitation (PPT), spring snow water equivalent (SWE), and summertime potential evapotranspiration (PET). Elasticities are calculated using log-log linear regression, and linear reservoir storage coefficients are used to represent basin geology. Storage coefficients are estimated using baseflow recession analysis. On average, SWE, PET, and PPT explain about 1/3 of the summertime low flow variance. Snow-dominated basins with long timescales of baseflow recession are least sensitive to changes in SWE, PPT, and PET, while rainfall-dominated, faster draining basins are most sensitive. There are also implications for the predictability of summer low flows. The R2 between streamflow and SWE drops from 0.62 to 0.47 from snow-dominated to rain-dominated basins, while there is no corresponding increase in R2 between streamflow and PPT.
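The elasticity estimate itself is a regression slope in log-log space. A minimal sketch for several predictors at once; variable names are illustrative:

```python
import numpy as np

def elasticities(q_min, predictors):
    """Elasticities of summer minimum flow to climate predictors
    (e.g., SWE, PPT, PET) via log-log linear regression: each fitted
    slope is d(ln Q) / d(ln X)."""
    A = np.column_stack([np.ones(len(q_min))] +
                        [np.log(x) for x in predictors])
    coef, *_ = np.linalg.lstsq(A, np.log(q_min), rcond=None)
    return coef[1:]   # one elasticity per predictor

# Interpretation: a 10% increase in a predictor changes the low flow
# by roughly (elasticity * 10)%.
```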
Evaluation of an active humidification system for inspired gas.
Roux, Nicolás G; Plotnikow, Gustavo A; Villalba, Darío S; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto
2015-03-01
The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized, spontaneously breathing patients. Measurements were quantified at three temperature (T°) levels of the AHS (level I, low; level II, middle; level III, high) and at different flow levels (20 to 60 L/minute). Statistical analysis of repeated measurements was performed using analysis of variance, and significance was set at P<0.05. While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas at flows of 20 and 30 L/minute. Finally, at the highest temperature setting (level III), every flow reached the minimum recommended absolute humidity (AH) of 30 mg/L. According to our results, to obtain appropriate relative humidity, AH, and T° of the gas, one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/minute, or at a T° of 61℃ at any flow rate.
Reexamining the minimum viable population concept for long-lived species.
Shoemaker, Kevin T; Breisch, Alvin R; Jaycox, Jesse W; Gibbs, James P
2013-06-01
For decades conservation biologists have proposed general rules of thumb for minimum viable population size (MVP); typically, they range from hundreds to thousands of individuals. These rules have shifted conservation resources away from small and fragmented populations. We examined whether iteroparous, long-lived species might constitute an exception to general MVP guidelines. On the basis of results from a 10-year capture-recapture study in eastern New York (U.S.A.), we developed a comprehensive demographic model for the globally threatened bog turtle (Glyptemys muhlenbergii), which was designated as endangered by the IUCN in 2011. We assessed population viability across a wide range of initial abundances and carrying capacities. Not accounting for inbreeding, our results suggest that bog turtle colonies with as few as 15 breeding females have >90% probability of persisting for >100 years, provided vital rates and environmental variance remain at currently estimated levels. On the basis of our results, we suggest that MVP thresholds may be 1-2 orders of magnitude too high for many long-lived organisms. Consequently, protection of small and fragmented populations may constitute a viable conservation option for such species, especially in a regional or metapopulation context. © 2013 Society for Conservation Biology.
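The shape of such a viability result can be reproduced with a toy stochastic projection: a ceiling carrying capacity, lognormal environmental noise on the growth rate, and demographic (Poisson) noise. All parameter values below are illustrative, not the study's bog-turtle estimates:

```python
import numpy as np

def persistence_probability(n0=15, years=100, reps=10_000, k=50,
                            lam_mean=1.02, env_sd=0.1, seed=0):
    """Fraction of replicate trajectories still extant after `years`,
    starting from n0 breeding females under ceiling capacity k."""
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            lam = lam_mean * rng.lognormal(0.0, env_sd)  # environmental noise
            n = min(rng.poisson(n * lam), k)             # demographic noise
            if n == 0:
                break
        alive += n > 0
    return alive / reps
```

Long-lived species with high adult survival have low variance in the realized growth rate, which is why small colonies can show high persistence probabilities in models of this kind.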
49 CFR 192.931 - How may Confirmatory Direct Assessment (CDA) be used?
Code of Federal Regulations, 2010 CFR
2010-10-01
...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.931 How may Confirmatory Direct Assessment (CDA) be used? An...
Particle tracking by using single coefficient of Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Widjaja, J.; Dawprateep, S.; Chuamchaitrakool, P.; Meemon, P.
2016-11-01
A new method for extracting information from particle holograms by using a single coefficient of the Wigner-Ville distribution (WVD) is proposed to obviate drawbacks of conventional numerical reconstructions. Our previous study found that analysis of the holograms using the WVD gives output coefficients that are mainly confined along a diagonal direction intercepting the origin of the WVD plane. The slope of this diagonal is inversely proportional to the particle position. One of these coefficients always has minimum amplitude, regardless of the particle position. By detecting the position of the minimum-amplitude coefficient in the WVD plane, the particle position can be accurately measured. The proposed method is verified through computer simulations.
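A basic discrete WVD against which such coefficient-domain analyses can be prototyped; this is a minimal sketch, and practical implementations add analytic-signal conversion and windowing:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal x.
    W[n] holds the coefficients for time index n; the frequency axis is
    scaled by a factor of two relative to an ordinary FFT, as usual."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)
        kernel = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        # Real by construction: the kernel is conjugate-symmetric in m.
        W[n] = np.fft.fft(kernel).real
    return W
```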
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.
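The ADM updates are easiest to see in a stripped-down 1D LS-TV denoising problem, min 0.5*||x - b||^2 + lam*||Dx||_1: a quadratic x-update, a soft-thresholding z-update, and a dual ascent step. This sketch omits the PET system matrix and the Poisson variant:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def admm_tv_denoise(b, lam=1.0, rho=1.0, n_iter=200):
    """1D LS-TV denoising via the alternating direction method
    (a simplified stand-in for the paper's 2D PET reconstruction)."""
    n = len(b)
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                 shape=(n - 1, n)).tocsc()          # finite differences
    solver = splu((sp.identity(n) + rho * (D.T @ D)).tocsc())
    x, z, u = b.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        x = solver.solve(b + rho * D.T @ (z - u))   # quadratic x-update
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z                                  # dual update
    return x
```

Because the x-update system is prefactorized, each iteration costs only sparse triangular solves and matrix-vector products, which is one reason ADM converges faster in practice than IST-type schemes.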
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
Picone, Marco; Bergamin, Martina; Delaney, Eugenia; Ghirardini, Annamaria Volpi; Kusk, Kresten Ole
2018-01-01
Early-life-stage development of the calanoid copepod Acartia tonsa from egg to copepodite I is proposed as an endpoint for assessing sediment toxicity, with newly released eggs exposed directly at the sediment-water interface. A preliminary study of 5 sediment samples collected in the lagoon of Venice highlighted that the larval development rate (LDR) and early-life stage (ELS) mortality endpoints with A. tonsa are more sensitive than the standard amphipod mortality test; moreover, LDR proved a more reliable endpoint than ELS mortality, owing to interference of the sediment with the recovery of unhatched eggs and dead larvae. The LDR data collected in a definitive study of 48 sediment samples from the Venice Lagoon were analysed together with the preliminary data to evaluate the statistical performance of the bioassay (among-replicate variance and minimum significant difference between samples and control) and to investigate possible correlations with sediment chemistry and physical properties. The results showed that the statistical performance of the LDR test with A. tonsa corresponds with the outcomes of other tests applied to the sediment-water interface (Strongylocentrotus purpuratus embryotoxicity test), sediments (Neanthes arenaceodentata survival and growth test) and porewater (S. purpuratus); the LDR endpoint did, however, show a slightly higher variance compared with other tests used in the Lagoon of Venice, such as the 10-d amphipod lethality test and larval development tests with sea urchin and bivalve embryos. Sediment toxicity data highlighted the high sensitivity of the larval development endpoint and its clear ability to discriminate among sediments characterized by different levels of contamination. The definitive study showed that inhibition of larval development was not affected by grain size or the organic carbon content of the sediment; in contrast, a strong correlation was found between inhibition of larval development and sediment concentrations of some metals (Cu, Hg, Pb, Zn), acid-volatile sulphides (AVS), polychlorinated biphenyls (PCBs) and polynuclear aromatic hydrocarbons (PAHs). No correlation was found with DDTs, hexachlorobenzene or organotin compounds. Copyright © 2017 Elsevier Inc. All rights reserved.
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility of developing wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which estimate optimality can easily be traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
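The localization idea can be prototyped on a small Hudgin grid. In the sketch below a regularized least-squares reconstructor stands in for the full minimum variance estimator (which would also weight turbulence and noise statistics), and a per-row magnitude threshold stands in for neighborhood truncation; the decay of reconstructor weights with distance is what makes the approximation work:

```python
import numpy as np

def hudgin_matrix(n):
    """Hudgin-geometry slope operator for an n x n phase grid:
    x-slopes are horizontal first differences, y-slopes vertical ones."""
    N, rows = n * n, []
    for i in range(n):
        for j in range(n - 1):                 # x-slopes
            r = np.zeros(N); r[i * n + j] = -1; r[i * n + j + 1] = 1
            rows.append(r)
    for i in range(n - 1):
        for j in range(n):                     # y-slopes
            r = np.zeros(N); r[i * n + j] = -1; r[(i + 1) * n + j] = 1
            rows.append(r)
    return np.array(rows)

n = 8
G = hudgin_matrix(n)
# Regularized least-squares reconstructor; the small ridge term pins down
# the unobservable piston mode.
R = np.linalg.solve(G.T @ G + 1e-6 * np.eye(n * n), G.T)
# Localized approximation: keep, for each phase point, only the slope
# measurements with non-negligible weight; R becomes sparse and the
# estimate is a single sparse matrix-vector product R_local @ slopes.
mask = np.abs(R) >= 0.05 * np.abs(R).max(axis=1, keepdims=True)
R_local = np.where(mask, R, 0.0)
```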
Theory of Financial Risk and Derivative Pricing
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2009-01-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Theory of Financial Risk and Derivative Pricing - 2nd Edition
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2003-12-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect the anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented; they favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.
MRI brain tumor segmentation based on improved fuzzy c-means method
NASA Astrophysics Data System (ADS)
Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo
2009-10-01
This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. First, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information on the tissue gradients and the intensity inhomogeneities of the regions to improve the quality of the segmentation. The sum of the mean variance within a region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; its minimum gives the optimum result. The results show that the clustering segmentation algorithm is effective.
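For reference, the plain fuzzy c-means iteration that the method builds on; the paper's gradient and inhomogeneity terms are not included in this sketch:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on feature vectors X (n_samples, n_features),
    e.g. flattened pixel intensities; returns cluster centers and the
    (c, n_samples) membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                           # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :],
                           axis=2) + 1e-12       # sample-to-center distances
        U_new = d ** (-2.0 / (m - 1.0))          # standard membership update
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

Thresholding the membership of the region-of-interest cluster yields the initial segmentation that the gradient and inhomogeneity terms then refine.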