Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
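To make the Monte Carlo averaging step concrete, the following is a hedged sketch of estimating an effective coefficient for a toy one-dimensional random medium, where the corrector problem reduces to a harmonic mean, together with one standard variance reduction approach of the kind surveyed (a control variate built from the arithmetic mean of the same coefficients, whose expectation is known). The uniform coefficient law, the cell count and all numerical values are illustrative assumptions, not taken from the paper.

```python
"""Hedged sketch: variance reduction for a toy 1D stochastic-homogenization problem."""
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n_cells, n_samples = 1.0, 3.0, 64, 2000

# One row = one configuration of the random medium (i.i.d. cell coefficients).
a = rng.uniform(alpha, beta, size=(n_samples, n_cells))

# In 1D the apparent homogenized coefficient of a configuration is the harmonic
# mean of its cell coefficients (the corrector problem is solvable in closed form).
qoi = 1.0 / np.mean(1.0 / a, axis=1)

# Plain Monte Carlo estimator of the effective coefficient.
plain_se = qoi.std(ddof=1) / np.sqrt(n_samples)

# Control variate: the arithmetic mean of the same cells has known expectation.
ctrl = a.mean(axis=1)
c_opt = np.cov(qoi, ctrl)[0, 1] / ctrl.var(ddof=1)
cv = qoi - c_opt * (ctrl - 0.5 * (alpha + beta))
cv_se = cv.std(ddof=1) / np.sqrt(n_samples)

print(f"plain MC        : {qoi.mean():.4f} +/- {1.96 * plain_se:.4f}")
print(f"control variate : {cv.mean():.4f} +/- {1.96 * cv_se:.4f}")
```

Because the harmonic and arithmetic means of the same configuration are strongly correlated, the control-variate estimator typically yields a noticeably narrower confidence interval for the same number of configurations.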
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
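As background for the exponential transform that Dwivedi's method builds on, here is a hedged sketch of path-length stretching for a purely absorbing one-dimensional slab; the angular biasing component of Dwivedi's scheme is omitted, and the cross section, slab thickness and stretching parameter are illustrative assumptions.

```python
"""Hedged sketch of the exponential transform (path-length stretching) for a
purely absorbing 1-D slab transmission problem."""
import numpy as np

rng = np.random.default_rng(1)
sigma, thickness, n_hist = 1.0, 5.0, 100_000
p = 0.8                       # stretching parameter, 0 <= p < 1
sigma_b = sigma * (1.0 - p)   # biased ("stretched") total cross section

def transmission(biased):
    scores = np.zeros(n_hist)
    s_total = sigma_b if biased else sigma
    dist = rng.exponential(1.0 / s_total, size=n_hist)   # distance to collision
    escaped = dist > thickness
    if biased:
        # weight factor that restores the analog escape probability exp(-sigma*T)
        w_escape = np.exp(-sigma * thickness) / np.exp(-sigma_b * thickness)
        scores[escaped] = w_escape
    else:
        scores[escaped] = 1.0
    return scores.mean(), scores.std(ddof=1) / np.sqrt(n_hist)

print("exact      :", np.exp(-sigma * thickness))
print("analog     : %.5f +/- %.5f" % transmission(False))
print("exp. transf: %.5f +/- %.5f" % transmission(True))
```

Both estimators are unbiased; the stretched sampling pushes more histories through the slab and carries a compensating weight, which is where the variance reduction comes from.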
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
[Figure text: campaign-level model and outputs, aggregation, metamodeling, complexity (spatial, temporal, etc.)] ... reduction, are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduced the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting point and to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
Discrete filtering techniques applied to sequential GPS range measurements
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1987-01-01
The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
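A minimal sketch of the kind of second-order (range and range-rate) Kalman filter described above, applied to synthetic noisy range measurements from a single satellite; the dynamics model, noise levels and sample rate are assumptions for illustration, not values from the report.

```python
"""Hedged sketch of a two-state Kalman filter smoothing sequential range measurements."""
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1.0, 500
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state transition
H = np.array([[1.0, 0.0]])                  # we observe range only
Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
R = np.array([[25.0]])                      # measurement noise variance (5 m std)

truth = np.array([1000.0, -3.0])            # true range (m) and range rate (m/s)
x, P = np.array([900.0, 0.0]), np.eye(2) * 100.0
errors_in, errors_out = [], []
for _ in range(n):
    truth = F @ truth
    z = truth[0] + rng.normal(0.0, 5.0)     # noisy range measurement
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the range measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    errors_in.append(z - truth[0])
    errors_out.append(x[0] - truth[0])

print("input  noise variance:", np.var(errors_in))
print("output noise variance:", np.var(errors_out))
```

The ratio of input to output error variance plays the same role as the noise-reduction factor quoted in the abstract, although the exact factor depends on the assumed dynamics and noise levels.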
Monte Carlo isotopic inventory analysis for complex nuclear systems
NASA Astrophysics Data System (ADS)
Phruksarojanakun, Phiphat
Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling aims at increasing frequencies of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/R_T^2) and the other on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The result confirms that MCise is a reliable tool for inventory analysis of complex nuclear systems.
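For context, the conventional Monte Carlo figure of merit combines a tally's relative error with the computing time. The sketch below assumes FOM = 1/(R^2 * t); the thesis-specific variants (1/R_T^2 and 1/(D_k R_T^2)) would substitute the target isotope's relative error R_T and a detection limit D_k, which are treated here as given inputs rather than reproduced exactly.

```python
"""Hedged sketch of a conventional Monte Carlo figure of merit, FOM = 1/(R^2 * t)."""
import numpy as np

def figure_of_merit(scores, cpu_seconds):
    """Return (mean, relative error, FOM) for a vector of per-history scores."""
    mean = scores.mean()
    rel_err = scores.std(ddof=1) / (np.sqrt(scores.size) * mean)
    return mean, rel_err, 1.0 / (rel_err**2 * cpu_seconds)

rng = np.random.default_rng(3)
analog = rng.exponential(1.0, 100_000)          # stand-in for an analog tally
print("mean %.4f  R %.4f  FOM %.1f" % figure_of_merit(analog, cpu_seconds=10.0))
```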
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of deep-penetration neutron calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark documentation is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficient are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper reviews issues involved in converting continuous variables to nominal variables to be used in the OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are reviewed in addition to concerns regarding the loss of variance and a reduction in…
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
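The following is a hedged, CADIS-style sketch of how a deterministic adjoint flux can be turned into mesh-based weight windows and a biased source; the one-dimensional mesh, the adjoint shape and the window width are illustrative assumptions, and the code is not an MCNP/PARTISN interface.

```python
"""Hedged sketch of CADIS-style weight-window and source-biasing generation
from a deterministic adjoint flux on a 1-D mesh."""
import numpy as np

adjoint_flux = np.exp(np.linspace(0.0, 6.0, 30))     # adjoint importance on a mesh
source_cell = 0                                      # source sits in the first cell
window_ratio = 5.0                                   # upper/lower bound ratio

# Response estimate R = sum over source cells of q * adjoint flux.
source_strength = np.zeros_like(adjoint_flux)
source_strength[source_cell] = 1.0
response = np.sum(source_strength * adjoint_flux)

# Weight-window centers are chosen so a source particle starts inside its
# window: w_center(i) = R / adjoint_flux(i).
w_center = response / adjoint_flux
w_lower = 2.0 * w_center / (window_ratio + 1.0)
w_upper = window_ratio * w_lower

# Source biasing: sample source cells proportionally to q * adjoint flux, with
# birth weights adjusted so the estimator stays unbiased.
biased_pdf = source_strength * adjoint_flux / response
birth_weight = np.divide(source_strength, biased_pdf, out=np.zeros_like(biased_pdf),
                         where=biased_pdf > 0)

print("weight-window lower bounds:", np.round(w_lower[:5], 4))
print("weight-window upper bounds:", np.round(w_upper[:5], 4))
print("source-cell birth weight  :", birth_weight[source_cell])
```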
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second removes an along-track once-per-revolution sine wave over about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
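A hedged sketch of the two along-track corrections compared above: a degree-1 polynomial removed over short arcs versus a once-per-revolution sine wave removed over a full revolution, both fitted by least squares to a synthetic profile. The arc length, the synthetic sea-level signal and the noise model are assumptions for illustration.

```python
"""Hedged sketch of along-track orbit-error reduction: short-arc polynomial vs
once-per-revolution sinusoid removal."""
import numpy as np

rng = np.random.default_rng(4)
circumference = 40_000.0                       # km, one revolution
x = np.linspace(0.0, circumference, 2000, endpoint=False)
signal = 0.3 * np.sin(2 * np.pi * x / 8000.0)  # stand-in oceanographic signal (m)
orbit_error = 1.2 * np.sin(2 * np.pi * x / circumference + 0.7)
h = signal + orbit_error + rng.normal(0.0, 0.05, x.size)

def remove_polynomial(x, h, arc_km=5000.0):
    """Fit and subtract a degree-1 polynomial on consecutive short arcs."""
    out = h.copy()
    for start in np.arange(0.0, x[-1], arc_km):
        m = (x >= start) & (x < start + arc_km)
        out[m] -= np.polyval(np.polyfit(x[m], h[m], 1), x[m])
    return out

def remove_once_per_rev(x, h):
    """Fit and subtract c0 + a*sin + b*cos at one cycle per revolution."""
    A = np.column_stack([np.ones_like(x),
                         np.sin(2 * np.pi * x / circumference),
                         np.cos(2 * np.pi * x / circumference)])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return h - A @ coef

for name, resid in [("polynomial", remove_polynomial(x, h)),
                    ("sine wave ", remove_once_per_rev(x, h))]:
    # How much of the true signal survives the correction?
    retained = np.corrcoef(resid, signal)[0, 1]
    print(f"{name}: residual variance {resid.var():.4f}, corr. with signal {retained:.2f}")
```

With these assumptions, the short-arc polynomial attenuates more of the large-scale signal along with the orbit error, mirroring the comparison reported in the abstract.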
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
2008-09-15
however, a variety of so-called variance-reduction techniques (VRTs) that have been developed, which reduce output variance with little or no ... additional computational effort. VRTs typically achieve this via judicious and careful reuse of the basic underlying random numbers. Perhaps the best-known ... typical simulation situation - change a weapons-system configuration and see what difference it makes). Key to making CRN and most other VRTs work
A preliminary study of the benefits of flying by ground speed during final approach
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.
1978-01-01
A study was conducted to evaluate the benefits of an approach technique which utilized constant ground speed on final approach. It was determined that the technique reduced the capacity losses in headwinds experienced with the currently used constant-airspeed technique. The benefits of the technique were found to increase as headwinds increased and as the wake-avoidance separation intervals were reduced. An additional benefit noted for the constant ground speed technique was a reduction in stopping-distance variance due to the approach wind environment.
[Locked volar plating for complex distal radius fractures: maintaining radial length].
Jeudy, J; Pernin, J; Cronier, P; Talha, A; Massin, P
2007-09-01
Maintaining radial length, likely to be the main challenge in the treatment of complex distal radius fractures, is necessary for complete grip-strength and pro-supination range recovery. In spite of frequent secondary displacements, bridging external fixation has remained the reference method, either isolated or in association with additional percutaneous pins or volar plating. Also, there seems to be a relation between algodystrophy and the duration of traction applied on the radio-carpal joint. Fixed-angle volar plating offers the advantage of maintaining the reduction until fracture healing, without bridging the joint. In a prospective study, forty-three consecutive fractures of the distal radius with a positive ulnar variance were treated with open reduction and fixed-angle volar plating. Results were assessed with special attention to the radial length and angulation obtained and maintained throughout treatment, based on repeated measurements of the ulnar variance and radial angulation in the first six months postoperatively. The correction of the ulnar variance was maintained until complete recovery, independently of initial metaphyseal comminution, and of the amount of radial length gained at reduction. Only 3 patients lost more than 1 mm of radial length after reduction. The posterior tilt of the distal radial epiphysis was incompletely reduced in 13 cases, whereas reduction was partially lost in 6 elderly osteoporotic female patients. There were 8 articular malunions, all of them less than 2 mm. Secondary displacements were found to be related to a deficient locking technique. Eight patients developed algodystrophy. The risk factors for algodystrophy were articular malunion, associated posterior pinning, and associated lesions of the ipsilateral upper limb. Provided that the locking technique was correct, this type of fixation appeared efficient in maintaining the radial length in complex fractures of the distal radius. The main challenge remains the reduction of displaced articular fractures. Based on these results, it is not possible to conclude that this method is superior to external fixation.
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
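A hedged sketch of the underlying idea: the Hutchinson estimator with Rademacher probes, with and without deflating the smallest eigenpairs (whose contribution to the trace of the inverse is computed exactly). The small dense symmetric positive-definite test matrix stands in for the large sparse lattice-QCD operator, and all sizes are illustrative assumptions.

```python
"""Hedged sketch of the Hutchinson trace estimator with eigenvalue deflation."""
import numpy as np

rng = np.random.default_rng(5)
n, k, n_probe = 200, 10, 200

# Ill-conditioned SPD test matrix A = U diag(eigvals) U^T.
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigvals = np.logspace(-3, 1, n)
Ainv = (U / eigvals) @ U.T              # explicit inverse, fine for a small demo

def hutchinson(M, n_probe):
    ests = []
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe vector
        ests.append(z @ M @ z)
    ests = np.asarray(ests)
    return ests.mean(), ests.std(ddof=1) / np.sqrt(n_probe)

# Plain Hutchinson on A^{-1}.
plain = hutchinson(Ainv, n_probe)

# Deflation: the k smallest eigenpairs of A dominate A^{-1}; their trace
# contribution is computed exactly and only the remainder is estimated stochastically.
Vk = U[:, :k]
exact_part = np.sum(1.0 / eigvals[:k])
deflated = Ainv - Vk @ np.diag(1.0 / eigvals[:k]) @ Vk.T
defl = hutchinson(deflated, n_probe)

print("true trace      :", np.sum(1.0 / eigvals))
print("plain Hutchinson: %.2f +/- %.2f" % plain)
print("deflated        : %.2f +/- %.2f" % (exact_part + defl[0], defl[1]))
```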
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called variance reduction technique (VRT) in MC was implemented to speed up this duration. This work focused on optimisation of VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle-history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷ histories. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving its accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can with the same knowledge of data get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
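A hedged sketch of coupling a geostatistical variance criterion with simulated annealing to relocate gauges: candidate networks are scored by a simple kriging-like estimation-variance proxy averaged over a grid, and annealing occasionally accepts worse moves. The domain, covariance model and cooling schedule are illustrative assumptions, not those used for the Johor network.

```python
"""Hedged sketch: simulated annealing of gauge locations against a variance proxy."""
import numpy as np

rng = np.random.default_rng(11)
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
n_gauges, corr_len = 8, 6.0

def estimation_variance(gauges):
    """Proxy criterion: 1 - max correlation to any gauge, averaged over the grid."""
    d = np.linalg.norm(grid[:, None, :] - gauges[None, :, :], axis=2)
    rho = np.exp(-d / corr_len)                     # exponential covariance model
    return np.mean(1.0 - rho.max(axis=1))

# Start from a random network and anneal.
network = grid[rng.choice(len(grid), n_gauges, replace=False)].copy()
score = estimation_variance(network)
T = 0.05
for step in range(3000):
    cand = network.copy()
    cand[rng.integers(n_gauges)] = grid[rng.integers(len(grid))]   # move one gauge
    cand_score = estimation_variance(cand)
    if cand_score < score or rng.uniform() < np.exp((score - cand_score) / T):
        network, score = cand, cand_score           # accept the move
    T *= 0.999                                      # geometric cooling

print("final mean estimation variance:", round(score, 4))
print("gauge locations:\n", network)
```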
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
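A hedged sketch of the antithetic-variates idea used above: when each simulated outcome is a monotone function of uniform draws, pairing u with 1-u induces negative correlation and reduces the standard error for the same number of model evaluations. The toy outcome model below is an illustrative assumption, not the UKPDS 68 equations.

```python
"""Hedged sketch of antithetic variates for a toy cohort simulation."""
import numpy as np

rng = np.random.default_rng(6)
n_pairs = 5000

def qalys(u):
    """Toy monotone mapping from a uniform draw to (capped) QALYs."""
    survival_years = -12.0 * np.log(u)          # exponential survival time
    return np.minimum(survival_years, 40.0) * 0.78

u = rng.uniform(size=n_pairs)

# Standard Monte Carlo with 2*n_pairs independent draws.
ind = qalys(rng.uniform(size=2 * n_pairs))
mc_se = ind.std(ddof=1) / np.sqrt(ind.size)

# Antithetic pairs: average qalys(u) and qalys(1-u) within each pair.
pair_means = 0.5 * (qalys(u) + qalys(1.0 - u))
av_se = pair_means.std(ddof=1) / np.sqrt(n_pairs)

print(f"independent sampling: mean {ind.mean():.3f}, s.e. {mc_se:.4f}")
print(f"antithetic variates : mean {pair_means.mean():.3f}, s.e. {av_se:.4f}")
```

Both estimators use the same total number of model evaluations; the reduction in standard error is what allows fewer replications for a given precision, as described in the abstract.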
Analytic variance estimates of Swank and Fano factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
The applicability of the operational radar and raingauge networks for urban hydrology is insufficient. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, respectively based upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as case study an urban catchment (∼8.65 km²) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
The key kinematic determinants of undulatory underwater swimming at maximal velocity.
Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross
2016-01-01
The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r² = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r² = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.
Decomposition of Some Well-Known Variance Reduction Techniques. Revision.
1985-05-01
"use a family of transformations to convert given samples into samples conditioned on a given characteristic (p. 04)." Dub and Horowitz (1979), Granovsky ... "Antithetic Variates Revisited," Commun. ACM 26, 11, 964-971. Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte Carlo," SIAM J. Alg
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
NASA Astrophysics Data System (ADS)
El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.
2015-10-01
A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitters calculations in proton therapy. Prompt-γ emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of the prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10⁵.
Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, D.
A new variance reduction technique (VRT) is implemented in the BEAM code [1] to specifically improve the efficiency of calculating penumbral distributions of in-air fluence profiles for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT includes splitting of photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributing the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis, in addition to the energy of the photon. It is shown that the use of the VRT developed in this work can lead to a 6-9-fold improvement in the efficiency of the penumbral photon fluence of a 60Co beam compared to that calculated using the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
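A hedged sketch of the symmetry-based splitting described above: each photon leaving the source region is replaced by several copies placed at random azimuths about the beam axis, rotating position and transverse direction together while preserving the axial direction cosine and the energy, with the statistical weight divided by the number of copies. The particle representation and field names are assumptions, not BEAM code internals.

```python
"""Hedged sketch of symmetry-based photon splitting about the beam (z) axis."""
import numpy as np

rng = np.random.default_rng(7)
n_split = 8

def split_on_circle(photon, n_split, rng):
    """Return n_split azimuthally rotated copies of a photon dict (x, y, z, u, v, w, E, wt)."""
    r = np.hypot(photon["x"], photon["y"])
    phi0 = np.arctan2(photon["y"], photon["x"])         # original position azimuth
    copies = []
    for dphi in rng.uniform(0.0, 2.0 * np.pi, n_split): # random rotation angles
        c, s = np.cos(dphi), np.sin(dphi)
        copies.append({
            # rotate position and transverse direction by the same angle about the axis
            "x": r * np.cos(phi0 + dphi), "y": r * np.sin(phi0 + dphi), "z": photon["z"],
            "u": c * photon["u"] - s * photon["v"],
            "v": s * photon["u"] + c * photon["v"],
            "w": photon["w"],                           # axial direction cosine preserved
            "E": photon["E"],                           # energy preserved
            "wt": photon["wt"] / n_split,               # weight split to keep the estimator unbiased
        })
    return copies

photon = {"x": 0.5, "y": 0.0, "z": 0.0, "u": 0.1, "v": 0.0,
          "w": np.sqrt(1.0 - 0.1**2), "E": 1.25, "wt": 1.0}
for c in split_on_circle(photon, n_split, rng):
    print({k: round(v, 3) for k, v in c.items()})
```

The splitting exploits the rotational symmetry of an isotropic capsule source, so the extra copies populate the penumbral region without biasing the azimuthally averaged fluence.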
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multidimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of the detection properties of the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform detection efficiency of the scattering and absorbing detectors, different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT) for a Compton camera (a normalization method) was studied. For the VRT, the Compton list-mode data of a planar uniform source of 140 keV was generated using the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
LaMothe, Jeremy; Baxter, Josh R; Gilbert, Susannah; Murphy, Conor I; Karnovsky, Sydney C; Drakos, Mark C
2017-06-01
Syndesmotic injuries can be associated with poor patient outcomes and posttraumatic ankle arthritis, particularly in the case of malreduction. However, ankle joint contact mechanics following a syndesmotic injury and reduction remains poorly understood. The purpose of this study was to characterize the effects of a syndesmotic injury and reduction techniques on ankle joint contact mechanics in a biomechanical model. Ten cadaveric whole lower leg specimens with undisturbed proximal tibiofibular joints were prepared and tested in this study. Contact area, contact force, and peak contact pressure were measured in the ankle joint during simulated standing in the intact, injured, and 3 reduction conditions: screw fixation with a clamp, screw fixation without a clamp (thumb technique), and a suture-button construct. Differences in these ankle contact parameters were detected between conditions using repeated-measures analysis of variance. Syndesmotic disruption decreased tibial plafond contact area and force. Syndesmotic reduction did not restore ankle loading mechanics to values measured in the intact condition. Reduction with the thumb technique was able to restore significantly more joint contact area and force than the reduction clamp or suture-button construct. Syndesmotic disruption decreased joint contact area and force. Although the thumb technique performed significantly better than the reduction clamp and suture-button construct, syndesmotic reduction did not restore contact mechanics to intact levels. Decreased contact area and force with disruption imply that other structures are likely receiving more loads (eg, medial and lateral gutters), which may have clinical implications such as the development of posttraumatic arthritis.
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models by lower order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insights of the main process dynamics.
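A hedged sketch contrasting ordinary PCA with Sparse PCA on synthetic 'snapshots' built from two hidden processes that each load on only a few variables; the data, the number of components and the scikit-learn sparsity penalty (alpha) are illustrative assumptions, not values from the DYRESM-CAEDYM study.

```python
"""Hedged sketch: dense PCA vs Sparse PCA loadings on synthetic snapshots."""
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(8)
n_snapshots, n_vars = 300, 12

# Synthetic snapshots driven by two hidden processes, each touching a few variables.
latent = rng.normal(size=(n_snapshots, 2))
loadings = np.zeros((2, n_vars))
loadings[0, :4] = [1.0, 0.8, 0.6, 0.5]       # first block of variables
loadings[1, 6:10] = [0.9, 0.7, 0.7, 0.4]     # second block of variables
X = latent @ loadings + 0.1 * rng.normal(size=(n_snapshots, n_vars))
X -= X.mean(axis=0)

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

print("PCA components (dense, every variable loads):")
print(np.round(pca.components_, 2))
print("Sparse PCA components (few non-zero loadings, easier to interpret):")
print(np.round(spca.components_, 2))
```

The sparse loadings isolate the small groups of variables behind each basis function, which is the interpretability gain the abstract describes, at the cost of capturing slightly less snapshot variance.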
Hersoug, Anne Grete
2004-12-01
My first focus of this study was to explore therapists' personal characteristics as predictors of the proportion of interpretation in brief dynamic psychotherapy (N=39; maximum 40 sessions). In this study, I used data from the Norwegian Multicenter Study on Process and Outcome of Psychotherapy (1995). The main finding was that therapists who had experienced good parental care gave less interpretation (28% variance was accounted for). Therapists who had more negative introjects used a higher proportion of interpretation (16% variance was accounted for). Patients' pretreatment characteristics were not predictive of therapists' use of interpretation. The second focus was to investigate the impact of therapists' personality and the proportion of interpretation on the development of patients' maladaptive defensive functioning over the course of therapy. Better parental care and less negative introjects in therapists were associated with a positive influence and accounted for 5% variance in the reduction of patients' maladaptive defense.
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied and these techniques have enabled us to reduce irradiation doses while maintaining image qualities. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. The non-positive signals must be converted to positive values so that they can be log-transformed. Since conventional conversion methods do not consider local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive values according to a function which replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Donald Rumsfeld, designated "U.S. Transportation Command as the single Department of Defense Distribution Process Owner (DPO)" (USTRANSCOM, 2004 ... paragraphs explain OptQuest's functionality and capabilities as described by Laguna (1997) and Glover et al. (1999) as well as the OptQuest for ARENA ... throughout the solution space (Glover et al., 1999). Heuristics are strategies (in this case algorithms) that use different techniques and available
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower-cost/higher-variance representative of the existing procedures.
Importance Sampling Variance Reduction in GRESS ATMOSIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakeford, Daniel Tyler
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
Ex Post Facto Monte Carlo Variance Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, Thomas E.
The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
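For readers unfamiliar with the baseline procedure, the following Python sketch shows a plain (not bias-reduced) maximum-likelihood fit of a cumulative-Gaussian psychometric function using the Nelder-Mead simplex. The simulated responses are illustrative, and the bias-reduction term and observed-information variance estimate discussed in the abstract are not implemented.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Plain maximum-likelihood fit of a cumulative-Gaussian psychometric function
# to binary responses.  Simulated data only; no bias reduction is applied.
rng = np.random.default_rng(1)
stim = rng.uniform(-4, 4, size=200)                 # stimulus levels
true_mu, true_sigma = 0.5, 1.5
resp = rng.random(200) < norm.cdf(stim, true_mu, true_sigma)   # 0/1 responses

def neg_log_likelihood(params):
    mu, log_sigma = params
    p = norm.cdf(stim, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)                  # avoid log(0)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"threshold ~{mu_hat:.2f}, spread ~{sigma_hat:.2f}")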
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
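A short Python sketch of weighted-regression calibration under the power model of variance (var ~ a*signal^b) follows; the replicate data, fitted exponent and linear calibration model are illustrative assumptions, not results from the paper.

import numpy as np

# Weighted least-squares calibration using the power model of variance,
# var(y) ~ a * y**b, as recommended in the abstract.  Simulated replicates.
rng = np.random.default_rng(2)
conc = np.repeat(np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0]), 5)
signal = 3.0 * conc * (1 + 0.05 * rng.standard_normal(conc.size))  # heteroskedastic

# Estimate the power model from replicate variances: log(var) = log(a) + b*log(mean).
levels = np.unique(conc)
means = np.array([signal[conc == c].mean() for c in levels])
variances = np.array([signal[conc == c].var(ddof=1) for c in levels])
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)

# Weighted linear calibration with weights 1/var predicted by the power model.
weights = 1.0 / (np.exp(log_a) * signal**b)
W = np.sqrt(weights)
X = np.column_stack([np.ones_like(conc), conc])
beta, *_ = np.linalg.lstsq(X * W[:, None], signal * W, rcond=None)
print(f"power-model exponent b ~{b:.2f}; calibration slope ~{beta[1]:.2f}")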
Mahmood, Toqeer; Irtaza, Aun; Mehmood, Zahid; Tariq Mahmood, Muhammad
2017-10-01
The most common form of image tampering, often carried out for malicious purposes, is to copy a region of an image and paste it elsewhere to hide some other region. Because both regions usually have the same texture properties, this artifact is invisible to viewers, and the credibility of the image becomes questionable in proof-centered applications. Hence, means are required to validate the integrity of the image and identify the tampered regions. This study therefore presents an efficient approach to copy-move forgery detection (CMFD) through local binary pattern variance (LBPV) computed over the low-frequency approximation components of the stationary wavelet transform. The CMFD technique presented in this paper is applied over circular regions to better address possible post-processing operations. The proposed technique is evaluated on the CoMoFoD and Kodak lossless true color image (KLTCI) datasets in the presence of translation, flipping, blurring, rotation, scaling, color reduction, brightness change, and multiple forged regions in an image. The evaluation reveals the superiority of the proposed technique compared to the state of the art. Consequently, the proposed technique can be reliably applied to detect modified regions, with benefits in journalism, law enforcement, the judiciary, and other proof-critical domains. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
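The core PCEV construction, as described in the abstract, amounts to maximising the ratio of explained to residual variance of a linear combination of outcomes, i.e. a generalized eigenproblem. The Python sketch below illustrates this reading on simulated data; it is not the authors' implementation and omits their high-dimensional strategy and testing procedures.

import numpy as np
from scipy.linalg import eigh

# Minimal sketch of the PCEV idea: find the linear combination w of outcomes Y
# that maximises w' B w / w' R w, where B and R are the explained and residual
# sums-of-squares matrices of Y after regressing each outcome on a covariate x.
rng = np.random.default_rng(3)
n, p = 500, 10
x = rng.standard_normal(n)
effects = np.zeros(p); effects[:3] = 0.5            # only 3 outcomes carry signal
Y = np.outer(x, effects) + rng.standard_normal((n, p))

X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
fitted = X @ coef
resid = Y - fitted
B = (fitted - fitted.mean(0)).T @ (fitted - fitted.mean(0))   # explained SSCP
R = resid.T @ resid                                           # residual SSCP

# Leading generalized eigenvector of (B, R) gives the PCEV loadings.
vals, vecs = eigh(B, R)
w = vecs[:, -1]
print("proportion of variance explained along PCEV:",
      (w @ B @ w) / (w @ (B + R) @ w))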
Reduction of variance in spectral estimates for correction of ultrasonic aberration.
Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C
2006-01-01
A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a special capability.
Massage and Reiki used to reduce stress and anxiety: Randomized Clinical Trial
Kurebayashi, Leonice Fumiko Sato; Turrini, Ruth Natalia Teresa; de Souza, Talita Pavarini Borges; Takiguchi, Raymond Sehiji; Kuba, Gisele; Nagumo, Marisa Toshi
2016-01-01
ABSTRACT Objective: to evaluate the effectiveness of massage and Reiki in the reduction of stress and anxiety in clients at the Institute for Integrated and Oriental Therapy in Sao Paulo (Brazil). Method: a randomized clinical trial conducted in parallel groups with an initial sample of 122 people divided into three groups: Massage + Rest (G1), Massage + Reiki (G2) and a control group without intervention (G3). The Stress Symptoms List and the State-Trait Anxiety Inventory were used to evaluate the groups at the start and after 8 sessions (1 month), during 2015. Results: there were statistically significant differences (p = 0.000) according to ANOVA (analysis of variance) for stress between groups 2 and 3 (p = 0.014), with a 33% reduction and a Cohen's d of 0.78. In relation to state anxiety, there was a reduction in the intervention groups compared with the control group (p < 0.01), with a 21% reduction in group 2 (Cohen's d of 1.18) and a 16% reduction in group 1 (Cohen's d of 1.14). Conclusion: Massage + Reiki produced the best results among the groups; further studies with a placebo group are recommended to evaluate the impact of the technique separately from other techniques. RBR-42c8wp PMID:27901219
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Gao, Weimin; Francis, Arokiasamy J.
2013-01-01
Previously, it has been shown that not only is uranium reduction under fermentation conditions common among clostridia species, but also that strains differ in the extent of their capability and that the pH of the culture significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of those clostridial strains active in uranium reduction under fermentation conditions have been characterized, and their effects on the variance in uranium reduction capability are discussed. Then, the relationship between hydrogen metabolism and uranium reduction has been further explored and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia. This is in contrast to the case of nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not result in uranium(VI) reduction. In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. Finally, a comprehensive model was proposed to explain uranium reduction by clostridia and its relationship to the overall metabolism, especially hydrogen (H2) production.
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...
2016-08-13
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Physiological correlates of mental workload
NASA Technical Reports Server (NTRS)
Zacharias, G. L.
1980-01-01
A literature review was conducted to assess the basis of and techniques for physiological assessment of mental workload. The study findings reviewed had shortcomings involving one or more of the following basic problems: (1) physiologic arousal can easily be driven by nonworkload factors, confounding any proposed metric; (2) the profound absence of underlying physiologic models has promulgated a multiplicity of seemingly arbitrary signal processing techniques; (3) the unspecified multidimensional nature of physiological "state" has given rise to a broad spectrum of competing noncommensurate metrics; and (4) the lack of an adequate definition of workload compels physiologic correlations to suffer either from the vagueness of implicit workload measures or from the variance of explicit subjective assessments. Using specific studies as examples, two basic signal processing/data reduction techniques in current use, time averaging and ensemble averaging, are discussed.
Control algorithms for dynamic attenuators.
Hsieh, Scott S; Pelc, Norbert J
2014-06-01
The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Improving Lidar Turbulence Estimates for Wind Energy: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew
2016-10-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Improving Lidar Turbulence Estimates for Wind Energy
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...
2016-10-03
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
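For context, the classical variance-based (Sobol-type) first-order indices that DSA generalizes can be estimated by Monte Carlo as in the Python sketch below; the toy model and sample sizes are illustrative, and the distributional sensitivity index function itself is not reproduced.

import numpy as np

# Standard Monte Carlo (Saltelli-type) estimate of first-order variance-based
# sensitivity indices for a toy model y = f(x1, x2, x3).
rng = np.random.default_rng(4)

def model(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

first_order = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # replace column i of A with column i of B
    fABi = model(ABi)
    Vi = np.mean(fB * (fABi - fA))      # Saltelli-style estimator of V_i
    first_order.append(Vi / var_y)

print("first-order indices:", np.round(first_order, 3))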
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.
Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand
2017-12-15
The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for the workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized in three levels were selected to describe the exposure relationships separately for the job types production, cleaning, maintenance, foreman, administration, laboratory, and other jobs by linear mixed models. The influence of plant and job determinants on variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models (best) describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased the thoracic aerosol exposure above the mean level for production workers were: packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent with welding and dismantling before repair work increased the exposure, while time with electrical maintenance and oiling decreased the exposure. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold (cleaners) variation in thoracic aerosol levels. Within-worker (WW) variance contained a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (from 5% to 39% reduction for production, maintenance, and other jobs, respectively, but a 79% increase for foremen), and a substantial influence on the between-worker within-plant variance (30-96% for production, foremen, and other workers). Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction and 8% increase), and considerable influence on the BP variance (36-58% reduction) compared to the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers. Thus, job task may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reducing the exposure levels. Rotation between low- and higher-exposed tasks may help equalize the exposure levels between high- and low-exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
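The variance-component decomposition used in such analyses can be sketched with a simple linear mixed model in Python (statsmodels), splitting total variance into between-worker and within-worker parts. The simulated exposures, variable names and single task covariate are assumptions for the sketch; the paper's plant-level determinants and model selection are not reproduced.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Repeated log-transformed thoracic aerosol measurements with a random
# intercept per worker; the fixed effect "packing" stands in for a job task.
rng = np.random.default_rng(5)
n_workers, n_repeats = 40, 6
worker = np.repeat(np.arange(n_workers), n_repeats)
packing = rng.integers(0, 2, size=worker.size)          # 1 = packing/shipping task
worker_effect = rng.normal(0, 0.6, n_workers)[worker]   # between-worker variation
log_exposure = 1.0 + 0.8 * packing + worker_effect + rng.normal(0, 0.9, worker.size)

df = pd.DataFrame({"log_exposure": log_exposure, "packing": packing, "worker": worker})
fit = smf.mixedlm("log_exposure ~ packing", df, groups=df["worker"]).fit()
print(fit.summary())
print("between-worker variance:", float(fit.cov_re.iloc[0, 0]))
print("within-worker variance:", fit.scale)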
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity of fMRI and functional connectivity analyses. We show a dramatic improvement over current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of parallel measured breathing and cardiac cycles. Correction using this model results in removal of variance matching the periodicity of the physiologic cycles. Using this framework allows easy modeling of noise. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). It is our hypothesis that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume for fMRI compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
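A minimal RETROICOR-style sketch of the traditional Fourier-series correction that the paper builds on is given below in Python; the cardiac phase trace, expansion order and noise amplitudes are illustrative, and the adaptive fit-reduction step proposed in the paper is not implemented.

import numpy as np

# Build a low-order Fourier expansion of the measured cardiac phase and
# regress it out of a simulated voxel time series.
rng = np.random.default_rng(6)
n_vols = 300
cardiac_phase = np.cumsum(rng.uniform(0.4, 0.6, n_vols)) % (2 * np.pi)

order = 2
regressors = [np.ones(n_vols)]
for k in range(1, order + 1):
    regressors += [np.sin(k * cardiac_phase), np.cos(k * cardiac_phase)]
X = np.column_stack(regressors)

# Simulated voxel: task signal + cardiac-coupled noise + thermal noise.
task = np.sin(2 * np.pi * np.arange(n_vols) / 40.0)
voxel = 2.0 * task + 1.5 * np.sin(cardiac_phase) + rng.standard_normal(n_vols)

beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta + beta[0]        # keep the mean, remove cyclic noise
print("variance before/after correction:", voxel.var(), cleaned.var())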
1987-09-01
… inverse transform method to obtain unit-mean exponential random variables, where Vj is the jth random number in the sequence of a stream of uniform random … numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = -P(b,c,d … Defender, C * P(b,c,d). We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in …
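The inverse-transform step referred to in these fragments is the standard construction of a unit-mean exponential random variable from a uniform random number, X = -ln(U); a minimal Python sketch follows (the thesis-specific quantities X(b,c,d) and P(b,c,d) are left aside).

import numpy as np

# Inverse transform sampling: the exponential CDF F(x) = 1 - exp(-x) inverts
# to x = -ln(1 - u), and since 1 - U is itself uniform we can use X = -ln(U).
rng = np.random.default_rng(7)
u = rng.random(1_000_000)
x = -np.log(u)
print("sample mean (should be ~1):", x.mean())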
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removal of outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
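As a familiar analogue of the direct CVE formula derived in the paper, ordinary least squares admits the closed form e_i = r_i/(1 - h_ii) for leave-one-out cross-validation errors, avoiding n separate refits. The Python sketch below demonstrates this classical identity on simulated data; the LSC-specific expression itself is not reproduced here.

import numpy as np

# Leave-one-out cross-validation errors for OLS via the hat matrix.
rng = np.random.default_rng(8)
n, p = 200, 4
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.3, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
cve = resid / (1.0 - np.diag(H))            # leave-one-out errors, no refitting

# Brute-force check for the first observation.
mask = np.ones(n, dtype=bool); mask[0] = False
beta0, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
print(cve[0], y[0] - X[0] @ beta0)          # the two numbers should agree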
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-01-01
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818
NASA Astrophysics Data System (ADS)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. 
The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.
Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. In conclusion, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.
Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell
Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.; ...
2017-07-26
Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. In conclusion, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
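The multilevel idea can be conveyed with a generic two-level Monte Carlo sketch in Python, in which many cheap low-fidelity samples are corrected by a few paired high-fidelity samples; the toy "solvers" below are stand-ins and do not represent the paper's HDG or reduced basis discretizations.

import numpy as np

# Generic two-level Monte Carlo estimator: a cheap surrogate carries most of
# the sampling, and a small paired sample of the expensive model corrects bias.
rng = np.random.default_rng(9)

def high_fidelity(z):            # stand-in for the expensive model output
    return np.sin(z) + 0.05 * z**2

def low_fidelity(z):             # stand-in for a cheap, correlated surrogate
    return np.sin(z)

n_coarse, n_fine = 100_000, 500
z_coarse = rng.standard_normal(n_coarse)
z_fine = rng.standard_normal(n_fine)

estimate = low_fidelity(z_coarse).mean() + \
           (high_fidelity(z_fine) - low_fidelity(z_fine)).mean()

reference = high_fidelity(rng.standard_normal(2_000_000)).mean()
print(f"two-level estimate {estimate:.4f} vs brute-force reference {reference:.4f}")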
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of Importance Sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution the estimator in this approach is uniformly better than that of the previously known Importance Sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction of Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known Importance Sampling approach.
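A minimal Python sketch of the generic importance-sampling idea for estimating a low false-alarm probability with a mean-shifted Gaussian biasing density is shown below; the threshold and sample sizes are illustrative, and the modified estimator and group-sampling scheme of the paper are not reproduced.

import numpy as np
from scipy.stats import norm

# Estimate P(X > t) for standard Gaussian noise, where t is chosen so that the
# false alarm probability is roughly 1E-6.
rng = np.random.default_rng(10)
t = 4.75
n = 200_000

# Direct Monte Carlo: essentially no threshold crossings at this sample size.
x_mc = rng.standard_normal(n)
p_mc = np.mean(x_mc > t)

# Importance sampling: draw from N(t, 1) and reweight by the likelihood ratio.
x_is = rng.normal(loc=t, scale=1.0, size=n)
weights = norm.pdf(x_is) / norm.pdf(x_is, loc=t, scale=1.0)
indicator = (x_is > t).astype(float)
p_is = np.mean(indicator * weights)
std_is = np.std(indicator * weights, ddof=1) / np.sqrt(n)

print(f"exact  {norm.sf(t):.3e}")
print(f"direct {p_mc:.3e}")
print(f"IS     {p_is:.3e} +/- {std_is:.1e}")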
Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P
2011-02-01
The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, variance in the younger group (≤ 15 y of age) was significantly less negative for girls (boys, -0.8 mm; girls, -0.3 mm; p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r(p) 0.87-0.97 for boys and 0.82-0.96 for girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Comparison of noise reduction systems
NASA Astrophysics Data System (ADS)
Noel, S. D.; Whitaker, R. W.
1991-06-01
When using infrasound as a tool for verification, the most important measurement for determining yield has been the peak-to-peak pressure amplitude of the signal. Therefore, there is a need to operate at the most favorable signal-to-noise ratio (SNR) possible. Winds near the ground can degrade the SNR, thereby making accurate signal amplitude measurement difficult. Wind noise reduction techniques were developed to help alleviate this problem; however, a noise-reducing system should reduce the noise without introducing distortion of coherent signals. An experiment is described that studies the response of a variety of noise-reducing configurations to a signal generated by an underground test (UGT) at the Nevada Test Site (NTS). In addition to the signal, background noise reduction is examined through measurements of variance. Sensors using two particular geometries of noise-reducing equipment, the spider and the cross, appear to deliver the best SNR. Because the spider configuration is easier to deploy, it is now the most commonly used.
Singla, Mamta; Aggarwal, Vivek; Logani, Ajay; Shah, Naseem
2010-03-01
The purpose of this in vitro study was to evaluate the effect of various root canal instrumentation techniques with different instrument tapers on cleaning efficacy and the resultant vertical root fracture (VRF) strength of the roots. Fifty human mandibular first premolar roots were enlarged to ISO size 20, inoculated with Enterococcus faecalis [ATCC2912] for 72 hours and divided into 5 groups: group I: prepared with .02 taper hand instruments ISO size 40; group II: Profile .04 taper size 40; group III: Profile .06 taper size 40; group IV: ProTaper size F4; and group V (control group), further divided into group Va: bacterial inoculation and no mechanical instrumentation; and group Vb: neither bacterial inoculation nor mechanical instrumentation. Cleaning efficacy was evaluated in terms of the reduction of colony forming units (CFUs). The VRF strength was evaluated using a D11 spreader as a wedge in an Instron testing machine. Root canals instrumented with ProTaper and 6% Profile instruments showed the maximum reduction in CFUs, with a statistically insignificant difference between them. The VRF resistance decreased in all instrumented groups. The difference in VRF between the 2% and 4% taper Profile groups was statistically insignificant (P = .195). One-way analysis of variance showed that canals instrumented with ProTaper F4 showed the maximum reduction in VRF resistance compared with the uninstrumented control group. Profile 6% taper instruments offer the advantage of maximum debridement without significant reduction in root fracture resistance. Copyright 2010 Mosby, Inc. All rights reserved.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
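To make the transformation step concrete, the sketch below applies a Box-Cox transformation, with the exponent chosen by maximum likelihood via scipy, to a synthetic positive response whose variance grows with its mean. The data, the group structure, and the simple likelihood fit are assumptions for illustration; the study itself embeds the transformation in nonlinear dose-response fitting.

```python
# Minimal sketch of Box-Cox variance stabilization (illustrative data, not the
# paper's toxicity datasets). scipy chooses lambda by maximum likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic positive responses whose spread shrinks at high "effect" levels.
mu = np.repeat([100.0, 60.0, 30.0, 10.0, 2.0], 20)
y = rng.gamma(shape=mu, scale=1.0)       # variance proportional to the mean

y_bc, lam = stats.boxcox(y)              # y_bc = (y**lam - 1)/lam for lam != 0
print(f"estimated Box-Cox lambda: {lam:.2f}")

# Crude check of variance homogeneity before/after, grouped by true mean level.
for label, values in (("raw", y), ("Box-Cox", y_bc)):
    group_vars = [np.var(values[mu == m], ddof=1) for m in np.unique(mu)]
    print(label, "group variances:", np.round(group_vars, 2))
```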
Kamara, Eli; Robinson, Jonathon; Bas, Marcel A; Rodriguez, Jose A; Hepinstall, Matthew S
2017-01-01
Acetabulum positioning affects dislocation rates, component impingement, bearing surface wear rates, and need for revision surgery. Novel techniques purport to improve the accuracy and precision of acetabular component position, but may have a significant learning curve. Our aim was to assess whether adopting robotic or fluoroscopic techniques improves acetabulum positioning compared to manual total hip arthroplasty (THA) during the learning curve. Three types of THAs were compared in this retrospective cohort: (1) the first 100 fluoroscopically guided direct anterior THAs (fluoroscopic anterior [FA]) done by a surgeon learning the anterior approach, (2) the first 100 robotic-assisted posterior THAs done by a surgeon learning robotic-assisted surgery (robotic posterior [RP]), and (3) the last 100 manual posterior (MP) THAs done by each surgeon (200 THAs) before adoption of novel techniques. Component position was measured on plain radiographs. Radiographic measurements were taken by 2 blinded observers. The percentage of hips within the surgeons' "target zone" (inclination, 30°-50°; anteversion, 10°-30°) was calculated, along with the percentage within the "safe zone" of Lewinnek (inclination, 30°-50°; anteversion, 5°-25°) and Callanan (inclination, 30°-45°; anteversion, 5°-25°). Relative risk (RR) and absolute risk reduction (ARR) were calculated. Variances (squares of the standard deviations) were used to describe the variability of cup position. Seventy-six percent of MP THAs were within the surgeons' target zone compared with 84% of FA THAs and 97% of RP THAs. This difference was statistically significant, associated with a RR reduction of 87% (RR, 0.13 [0.04-0.40]; P < .01; ARR, 21%; number needed to treat, 5) for RP compared to MP THAs. Compared to FA THAs, RP THAs were associated with a RR reduction of 81% (RR, 0.19 [0.06-0.62]; P < .01; ARR, 13%; number needed to treat, 8). Variances were lower for acetabulum inclination and anteversion in RP THAs (14.0 and 19.5) as compared to the MP (37.5 and 56.3) and FA (24.5 and 54.6) groups. These differences were statistically significant (P < .01). Adoption of robotic techniques delivers significant and immediate improvement in the precision of acetabular component positioning during the learning curve. While fluoroscopy has been shown to be beneficial with experience, a learning curve exists before precision improves significantly. Copyright © 2016 Elsevier Inc. All rights reserved.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
ERIC Educational Resources Information Center
Stapleton, Laura M.
2008-01-01
This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2011-03-01
Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to identify the optimal background detrending technique for the NPS estimate, four methods for artifact removal were quantitatively studied and compared: (1) Subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtracting two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate according to the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image led to an increase in NPS variance above the low-frequency components because of side lobe effects in the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result for the smoothness of the NPS curve, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also identified as effective for background detrending, but it was worse than subtraction of a 2-D second-order polynomial fit to the image according to the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to obtain a better NPS estimate by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending without consideration of processing time.
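As a rough picture of the preferred detrending step, the sketch below subtracts a least-squares 2-D second-order polynomial fit from a synthetic flat-field region of interest and compares a simple periodogram NPS before and after. The ROI size, pixel pitch, noise level, and injected background trend are all assumptions for illustration, not the authors' detector data or their exact NPS protocol.

```python
# Sketch: 2-D second-order polynomial background detrending before a
# single-realization periodogram NPS estimate (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
N, pitch = 128, 0.1                  # ROI side (pixels) and pixel pitch (mm), assumed
yy, xx = np.mgrid[0:N, 0:N] / N      # normalized pixel coordinates

# Synthetic "flat field": white noise plus a smooth low-frequency background trend.
roi = rng.normal(0.0, 5.0, (N, N)) + 40.0 * (xx - 0.5) ** 2 + 10.0 * yy

# Least-squares fit of a second-order 2-D polynomial, then subtract it.
A = np.column_stack([np.ones(N * N), xx.ravel(), yy.ravel(),
                     (xx * xx).ravel(), (xx * yy).ravel(), (yy * yy).ravel()])
coef, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
detrended = roi - (A @ coef).reshape(N, N)

def nps2d(img, pitch):
    """Single-realization periodogram NPS (no sub-ROI averaging or smoothing)."""
    n = img.shape[0]
    dft = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    return np.abs(dft) ** 2 * pitch**2 / n**2

c = N // 2
print("low-frequency NPS, raw:      ", round(float(nps2d(roi, pitch)[c, c + 1]), 2))
print("low-frequency NPS, detrended:", round(float(nps2d(detrended, pitch)[c, c + 1]), 2))
```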
NASA Astrophysics Data System (ADS)
Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon
This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique modifies the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to accurately determine the Echo Spectrum Variance (ESV) is presented, based upon the assumption of a linear relationship between the power spectra of the far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals confirm that our improved AENS system efficiently mitigates the problems of acoustic echo and background noise, while preserving speech quality and speech intelligibility.
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
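One differentiation-based route of this kind, shown here as an illustration rather than the note's exact argument, differentiates the normalization integral of the exponential density with respect to its parameter (once and then again), so no integration by parts is needed:

```latex
% Hedged illustration for the exponential density f(x) = \lambda e^{-\lambda x}, x > 0:
% differentiating both sides of the normalization identity in \lambda (and flipping signs)
% produces the moment integrals directly.
\[
\int_0^\infty e^{-\lambda x}\,dx=\frac{1}{\lambda}
\quad\Longrightarrow\quad
\int_0^\infty x\,e^{-\lambda x}\,dx=\frac{1}{\lambda^{2}},
\qquad
\int_0^\infty x^{2}e^{-\lambda x}\,dx=\frac{2}{\lambda^{3}},
\]
\[
E[X]=\lambda\cdot\frac{1}{\lambda^{2}}=\frac{1}{\lambda},
\qquad
E[X^{2}]=\lambda\cdot\frac{2}{\lambda^{3}}=\frac{2}{\lambda^{2}},
\qquad
\operatorname{Var}(X)=E[X^{2}]-E[X]^{2}=\frac{1}{\lambda^{2}}.
\]
```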
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high atomic number objects. Most of the techniques devised to reduce this artifact utilize a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, which conducts an additional reconstruction with the modified projection data from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, while not manipulating the projection. The PWLS for CT reconstruction has been investigated using a pre-defined weight, based on the variance of the projection datum at each detector bin. It works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: Visual inspection showed that the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques. Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: A new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction, including position, direction, energy and particle identification, in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
Control Variate Estimators of Survivor Growth from Point Samples
Francis A. Roesch; Paul C. van Deusen
1993-01-01
Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...
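The control-variate idea itself is compact enough to show in a few lines. The sketch below is a generic example under assumed quantities (a uniform control variable with known mean and an exponential integrand), not the survivor-growth estimators of the paper: the plain estimate of E[Y] is corrected by a correlated quantity whose expectation is known exactly.

```python
# Minimal control-variate sketch: estimate E[Y] using a correlated X with
# known mean. The variables and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
x = rng.uniform(0.0, 1.0, n)             # control variate with known E[X] = 0.5
y = np.exp(x)                            # quantity of interest, E[Y] = e - 1

c = np.cov(y, x)[0, 1] / np.var(x, ddof=1)     # estimated optimal coefficient
plain = y.mean()
cv = y.mean() + c * (0.5 - x.mean())

print(f"plain MC:        {plain:.5f}  (true {np.e - 1:.5f})")
print(f"control variate: {cv:.5f}")
print(f"variance ratio (CV/plain): "
      f"{np.var(y - c * (x - 0.5), ddof=1) / np.var(y, ddof=1):.3f}")
```

Because Y and the control are strongly correlated here, the corrected estimator's variance is a small fraction of the plain Monte Carlo variance, which mirrors the "large reductions in variance" noted above.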
Exploring factors affecting registered nurses' pursuit of postgraduate education in Australia.
Ng, Linda; Eley, Robert; Tuckett, Anthony
2016-12-01
The aim of this study was to explore the factors influencing registered nurses' pursuit of postgraduate education in specialty nursing practice in Australia. Despite the increased requirement for postgraduate education for advanced practice, little has been reported on the contributory factors involved in the decision to undertake further education. The Nurses' Attitudes Towards Postgraduate Education instrument was administered to 1632 registered nurses from the Nurses and Midwives e-Cohort Study across Australia, with a response rate of 35.9% (n = 568). Data reduction techniques using principal component analysis with varimax rotation were used. The analysis identified a three-factor solution for 14 items, accounting for 52.5% of the variance of the scale: "facilitators," "professional recognition," and "inhibiting factors." Facilitators of postgraduate education accounted for 28.5% of the variance, including: (i) improves knowledge; (ii) increases nurses' confidence in clinical decision-making; (iii) enhances nurses' careers; (iv) improves critical thinking; (v) improves nurses' clinical skill; and (vi) increased job satisfaction. This new instrument has potential clinical and research applications to support registered nurses' pursuit of postgraduate education. © 2016 John Wiley & Sons Australia, Ltd.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split and roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from computational results given by empirical formulas. The effect on the skyshine dose caused by different structures of the accelerator head is also discussed in this paper.
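To show what splitting and Russian roulette do in the simplest possible setting, the sketch below transports particles through a toy 1-D shield in which importance doubles with depth: particles are split 2-for-1 when they advance and rouletted when they backscatter. The geometry, interaction probabilities, and history counts are assumptions for illustration; this is not the MCNP4C skyshine model.

```python
# Toy 1-D split-and-roulette sketch (illustrative probabilities, not the
# accelerator-head or skyshine geometry of the paper).
import numpy as np

rng = np.random.default_rng(4)
LAYERS, P_FWD, P_BACK = 10, 0.40, 0.10      # remaining 0.50 is absorption

def history(splitting=True):
    """Transport one source particle (and its split progeny); return transmitted weight."""
    stack, transmitted = [(0, 1.0)], 0.0    # (layer index, statistical weight)
    while stack:
        k, w = stack.pop()
        while True:
            u = rng.random()
            if u < P_FWD:                   # advance toward the far boundary
                k += 1
                if k == LAYERS:
                    transmitted += w
                    break
                if splitting:               # importance ratio 2: split 2-for-1
                    w *= 0.5
                    stack.append((k, w))
            elif u < P_FWD + P_BACK:        # backscatter toward the source
                k -= 1
                if k < 0:
                    break                   # leaked back out, no score
                if splitting:               # importance ratio 1/2: Russian roulette
                    if rng.random() < 0.5:
                        w *= 2.0
                    else:
                        break
            else:
                break                       # absorbed
    return transmitted

for label, flag in (("analog", False), ("split+roulette", True)):
    scores = np.array([history(flag) for _ in range(20_000)])
    print(f"{label:15s} mean {scores.mean():.2e}   "
          f"std. err. {scores.std(ddof=1) / np.sqrt(len(scores)):.1e}")
```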
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
Estimation of bias and variance of measurements made from tomography scans
NASA Astrophysics Data System (ADS)
Bradley, Robert S.
2016-09-01
Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
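The extrapolation step is easy to see in a toy setting. The sketch below applies quadratic SIMEX to a deliberately simple errors-in-variables regression with a known noise level; the model, noise level, and pseudo-noise grid are assumptions for illustration, not the tomography measurements discussed above.

```python
# Minimal SIMEX sketch on a generic errors-in-variables regression (not the
# tomography metrics of the paper): add extra measurement noise at several
# levels, fit a quadratic trend, and extrapolate back to lambda = -1.
import numpy as np

rng = np.random.default_rng(5)
n, beta, sigma_u = 2_000, 1.0, 0.8
x = rng.normal(0.0, 1.0, n)                  # true covariate
w = x + rng.normal(0.0, sigma_u, n)          # observed with known noise sigma_u
y = beta * x + rng.normal(0.0, 0.2, n)

def slope(a, b):
    """Ordinary least-squares slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = []
for lam in lambdas:
    # Average the naive estimate over several pseudo-noise replicates per level.
    reps = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(50)]
    means.append(np.mean(reps))

coef = np.polyfit(lambdas, means, 2)          # quadratic in lambda
simex = np.polyval(coef, -1.0)                # extrapolate to the "no noise" point
print(f"naive slope {means[0]:.3f}, SIMEX slope {simex:.3f}, true {beta}")
print(f"estimated bias of the naive estimate: {means[0] - simex:+.3f}")
```

As in the abstract, the bias estimate is simply the difference between the original (naive) value and the SIMEX-extrapolated one; quadratic extrapolation removes much, though not all, of the attenuation in this toy case.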
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Preference uncertainty, preference learning, and paired comparison experiments
David C. Kingsley; Thomas C. Brown
2010-01-01
Results from paired comparison experiments suggest that as respondents progress through a sequence of binary choices they become more consistent, apparently fine-tuning their preferences. Consistency may be indicated by the variance of the estimated valuation distribution measured by the error term in the random utility model. A significant reduction in the variance is...
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence
NASA Astrophysics Data System (ADS)
Cheminet, Adam; Blanquart, Guillaume
2011-11-01
Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including 2nd and 3rd statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant density and variable density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
Evaluation of a social cognitive theory-based yoga intervention to reduce anxiety.
Mehta, Purvi; Sharma, Manoj
Yoga is often viewed as a form of alternative and complementary medicine, as it strives to achieve equilibrium between the body and mind that aids healing. Studies have shown the beneficial role of yoga in anxiety reduction. The purpose of this study was to design and evaluate a 10-week social cognitive theory-based yoga intervention to reduce anxiety. The yoga intervention utilized the constructs of behavioral capability, expectations, and self-efficacy for yoga from social cognitive theory, and included asanas (postures), pranayama (breathing techniques), shava asana (relaxation), and dhyana (meditation). A one-between and one-within group, quasi-experimental design was utilized for evaluation. Scales measuring expectations from yoga, self-efficacy for yoga, and Spielberger's State-Trait Anxiety Inventory were administered before and after the intervention. Repeated measures analyses of variance (ANOVA) were performed to compare pre-test and post-test scores in the two groups. Yoga as an approach shows promising results for anxiety reduction.
Control Variates and Optimal Designs in Metamodeling
2013-03-01
2.4.5 Selection of Control Variates for Inclusion in Model ... meet the normality assumption (Nelson 1990, Nelson and Yang 1992, Anonuevo and Nelson 1988). Jackknifing, splitting, and bootstrapping can be used to ... freedom to estimate the variance are lost due to being used for the control variate inclusion. This means the variance reduction achieved must now be
Hassan, Afrah Fatima; Yadav, Gunjan; Tripathi, Abhay Mani; Mehrotra, Mridul; Saha, Sonali; Garg, Nishita
2016-01-01
Caries excavation is a noninvasive technique of caries removal with maximum preservation of healthy tooth structure. The aim was to compare the efficacy of three different caries excavation techniques in reducing the count of cariogenic flora. Sixty healthy primary molars were selected from 26 healthy children with occlusal carious lesions without pulpal involvement and divided into three groups in which caries excavation was done with the help of (1) a carbide bur; (2) a polymer bur using a slow-speed handpiece; and (3) an ultrasonic tip with an ultrasonic machine. Samples were collected before and after caries excavation for microbiological analysis with the help of a sterile sharp spoon excavator. Samples were inoculated on blood agar plates and incubated at 37°C for 48 hours. After bacterial cultivation, the bacterial count of Streptococcus mutans was obtained. All statistical analyses were performed using SPSS statistical software, version 13. Kruskal-Wallis analysis of variance, Wilcoxon matched pairs test, and Z test were performed to reveal the statistical significance. The decrease in the bacterial count of S. mutans before and after caries excavation was significant (p < 0.001) in all three groups. The carbide bur showed the most efficient reduction in cariogenic flora, the ultrasonic tip showed almost comparable results, and the polymer bur showed the least reduction in cariogenic flora after caries excavation. Hassan AF, Yadav G, Tripathi AM, Mehrotra M, Saha S, Garg N. A Comparative Evaluation of the Efficacy of Different Caries Excavation Techniques in reducing the Cariogenic Flora: An in vivo Study. Int J Clin Pediatr Dent 2016;9(3):214-217.
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site that can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
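To illustrate how such a criterion can be evaluated before any samples are taken, the sketch below computes the prediction-error covariance of a simple-kriging predictor on a reference grid for two candidate transect layouts, and summarizes it by the mean prediction error variance plus the sum and log-product of its eigenvalues. The covariance model, site geometry, and transect layouts are illustrative assumptions, not the UXO site data or the simulated-annealing search described above.

```python
# Sketch of eigenvalue-based design comparison using simple-kriging prediction
# error on a small reference grid (illustrative covariance model and layouts).
import numpy as np

def exp_cov(a, b, corr_len=0.3, sill=1.0):
    """Exponential covariance between two sets of 2-D points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-d / corr_len)

# Reference grid over the unit square.
g = np.linspace(0.05, 0.95, 10)
grid = np.array([(x, y) for x in g for y in g])

def criteria(samples):
    """Prediction-error covariance on the grid and its eigenvalue summaries."""
    c_gg = exp_cov(grid, grid)
    c_gs = exp_cov(grid, samples)
    c_ss = exp_cov(samples, samples) + 1e-8 * np.eye(len(samples))
    err_cov = c_gg - c_gs @ np.linalg.solve(c_ss, c_gs.T)
    eig = np.clip(np.linalg.eigvalsh(err_cov), 1e-12, None)
    mpev = np.trace(err_cov) / len(grid)          # mean prediction error variance
    return mpev, eig.sum(), np.sum(np.log(eig))   # A-type and (log) D-type scores

# Two candidate designs: one straight transect vs. two parallel transects.
t = np.linspace(0.0, 1.0, 25)
one_transect = np.column_stack([t, np.full_like(t, 0.5)])
two_transects = np.vstack([np.column_stack([t, np.full_like(t, 0.33)]),
                           np.column_stack([t, np.full_like(t, 0.66)])])

for name, design in (("one transect", one_transect), ("two transects", two_transects)):
    mpev, s, logdet = criteria(design)
    print(f"{name:13s} MPEV {mpev:.3f}  sum(eig) {s:.1f}  log-prod(eig) {logdet:.1f}")
```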
Wolny, Tomasz; Saulicz, Edward; Linek, Paweł; Shacklock, Michael; Myśliwiec, Andrzej
2017-05-01
The purpose of this randomized trial was to compare the efficacy of manual therapy, including the use of neurodynamic techniques, with electrophysical modalities on patients with mild and moderate carpal tunnel syndrome (CTS). The study included 140 CTS patients who were randomly assigned to the manual therapy (MT) group, which included the use of neurodynamic techniques, functional massage, and carpal bone mobilization techniques, or to the electrophysical modalities (EM) group, which included laser and ultrasound therapy. Nerve conduction, pain severity, symptom severity, and functional status measured by the Boston Carpal Tunnel Questionnaire were assessed before and after treatment. Therapy was conducted twice weekly and both groups received 20 therapy sessions. A baseline assessment revealed group differences in sensory conduction of the median nerve (P < .01) but not in motor conduction (P = .82). Four weeks after the last treatment procedure, nerve conduction was examined again. In the MT group, median nerve sensory conduction velocity increased by 34% and motor conduction velocity by 6% (in both cases, P < .01). There was no change in median nerve sensory and motor conduction velocities in the EM group. Distal motor latency was decreased (P < .01) in both groups. A baseline assessment revealed no group differences in pain severity, symptom severity, or functional status. Immediately after therapy, analysis of variance revealed group differences in pain severity (P < .01), with a reduction in pain in both groups (MT: 290%, P < .01; EM: 47%, P < .01). There were group differences in symptom severity (P < .01) and function (P < .01) on the Boston Carpal Tunnel Questionnaire. Both groups had an improvement in functional status (MT: 47%, P < .01; EM: 9%, P < .01) and a reduction in subjective CTS symptoms (MT: 67%, P < .01; EM: 15%, P < .01). Both therapies had a positive effect on nerve conduction, pain reduction, functional status, and subjective symptoms in individuals with CTS. However, the results regarding pain reduction, subjective symptoms, and functional status were better in the MT group. Copyright © 2017. Published by Elsevier Inc.
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
1996-01-01
Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
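The correlated-process idea can be illustrated with a generic control-variate construction in the same spirit, though not with the paper's Fokker–Planck particle scheme. In the sketch below, a nonlinear SDE with no closed-form mean is simulated alongside an Ornstein-Uhlenbeck companion driven by the same Brownian increments; because the companion's mean is known exactly, the correlated pair yields a much lower-variance estimate. The SDE, parameters, and Euler-Maruyama discretization are assumptions for illustration.

```python
# Sketch of variance reduction with a correlated companion process
# (illustrative SDE and parameters, not the rarefied-gas solver of the paper).
import numpy as np

rng = np.random.default_rng(6)
n, steps, T, sigma, x0 = 50_000, 100, 1.0, 0.3, 0.5
dt = T / steps

x = np.full(n, x0)                     # main process:      dX = -sin(X) dt + s dW
y = np.full(n, x0)                     # companion process: dY = -Y dt      + s dW
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), n)      # shared Brownian increments
    x += -np.sin(x) * dt + sigma * dw
    y += -y * dt + sigma * dw

ey_exact = x0 * np.exp(-T)                    # known mean of the companion at T
c = np.cov(x, y)[0, 1] / np.var(y, ddof=1)
plain = x.mean()
corr = x.mean() - c * (y.mean() - ey_exact)

print(f"plain estimate of E[X_T]:       {plain:.5f}")
print(f"correlated-process estimate:    {corr:.5f}")
print(f"variance ratio (corrected/plain): "
      f"{np.var(x - c * y, ddof=1) / np.var(x, ddof=1):.3f}")
```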
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
MPF Top-Mast Measured Temperature
1997-10-14
This temperature figure shows the change in the mean and variance of the temperature fluctuations at the Pathfinder landing site. Sols 79 and 80 are very similar, with a significant reduction of the mean and variance on Sol 81. The science team suspects that a cold front passed over the landing site between Sols 80 and 81. http://photojournal.jpl.nasa.gov/catalog/PIA00978
NASA Astrophysics Data System (ADS)
Yuksel, Heba; Davis, Christopher C.
2006-09-01
Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging metrics have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques may be used to estimate them. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
Localized contourlet features in vehicle make and model recognition
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, B. S.
2009-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic Number Plate Recognition (ANPR) systems. Several vehicle MMR systems have been proposed in the literature. In parallel to this, the usefulness of multi-resolution based feature analysis techniques leading to efficient object classification algorithms has received close attention from the research community. To this effect, Contourlet transforms, which can provide an efficient directional multi-resolution image representation, have recently been introduced. An attempt has already been made in the literature to use Curvelet/Contourlet transforms in vehicle MMR. In this paper we propose a novel localized feature detection method in the Contourlet transform domain that is capable of increasing the classification rates by up to 4%, as compared to the previously proposed Contourlet based vehicle MMR approach in which the features are non-localized and thus result in sub-optimal classification. Further we show that the proposed algorithm can achieve the increased classification accuracy of 96% at significantly lower computational complexity due to the use of Two Dimensional Linear Discriminant Analysis (2DLDA) for dimensionality reduction by preserving the features with high between-class variance and low within-class variance.
Hahn, Andrew D; Rowe, Daniel B
2012-02-01
As more evidence is presented suggesting that the phase, as well as the magnitude, of functional MRI (fMRI) time series may contain important information and that there are theoretical drawbacks to modeling functional response in the magnitude alone, removing noise in the phase is becoming more important. Previous studies have shown that retrospective correction of noise from physiologic sources can remove significant phase variance and that dynamic main magnetic field correction and regression of estimated motion parameters also remove significant phase fluctuations. In this work, we investigate the performance of physiologic noise regression in a framework along with correction for dynamic main field fluctuations and motion regression. Our findings suggest that including physiologic regressors provides some benefit in terms of reduction in phase noise power, but it is small compared to the benefit of dynamic field corrections and use of estimated motion parameters as nuisance regressors. Additionally, we show that the use of all three techniques reduces phase variance substantially, removes undesirable spatial phase correlations and improves detection of the functional response in magnitude and phase. Copyright © 2011 Elsevier Inc. All rights reserved.
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of approximately 3.
flowVS: channel-specific variance stabilization in flow cytometry.
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
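As a rough picture of channel-wise stabilization, the sketch below transforms a synthetic two-population channel with asinh(x/c) and picks the cofactor c whose Bartlett statistic across the populations is smallest. The synthetic populations, the cofactor grid, and this simple grid search are assumptions for illustration; they are not the flowVS optimization or its datasets.

```python
# Minimal sketch of asinh-based variance stabilization selected with Bartlett's
# test (synthetic two-population channel, illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two "cell populations" on one channel: higher mean comes with higher spread.
low = rng.normal(200.0, 40.0, 5_000)
high = rng.normal(5_000.0, 900.0, 5_000)

cofactors = np.logspace(1, 4, 60)              # candidate asinh cofactors
bartlett_stats = [stats.bartlett(np.arcsinh(low / c), np.arcsinh(high / c))[0]
                  for c in cofactors]
best = cofactors[int(np.argmin(bartlett_stats))]

t_low, t_high = np.arcsinh(low / best), np.arcsinh(high / best)
print(f"selected cofactor: {best:.0f}")
print(f"raw variances:         {low.var(ddof=1):.0f} vs {high.var(ddof=1):.0f}")
print(f"transformed variances: {t_low.var(ddof=1):.3f} vs {t_high.var(ddof=1):.3f}")
```

The within-population variances, which differ by orders of magnitude on the raw scale, become comparable after the selected transformation, which is the property the abstract describes as desirable for downstream comparisons.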
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
NASA Astrophysics Data System (ADS)
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, commonly called the RR interval. This irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using these variances. Using clinical data from patients with atrial fibrillation episodes, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
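A toy version of this kind of detector fits in a few lines: compute the variance of RR intervals in a sliding window and flag windows above a cutoff. The synthetic RR series, window length, and threshold below are assumptions for illustration, not the clinical data or the tuned detector of the study.

```python
# Toy sketch of variance-based atrial fibrillation flagging on a synthetic
# RR-interval series (arbitrary window and threshold, illustrative only).
import numpy as np

rng = np.random.default_rng(8)

def rr_variance_flags(rr_ms, win=30, threshold=2500.0):
    """Sliding-window variance of RR intervals; flag windows above threshold."""
    rr = np.asarray(rr_ms, dtype=float)
    flags = []
    for i in range(len(rr) - win + 1):
        flags.append(np.var(rr[i:i + win], ddof=1) > threshold)
    return np.array(flags)

# Synthetic record: regular sinus rhythm followed by an irregular AF-like run.
sinus = rng.normal(800.0, 25.0, 200)          # ~75 bpm, small spread (ms)
af = rng.normal(650.0, 120.0, 200)            # faster and much more irregular
rr = np.concatenate([sinus, af])

flags = rr_variance_flags(rr)
print(f"fraction of flagged windows in sinus half: {flags[:150].mean():.2f}")
print(f"fraction of flagged windows in AF half:    {flags[-150:].mean():.2f}")
```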
Word Durations in Non-Native English
Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.
2010-01-01
In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the `accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
DSN system performance test Doppler noise models; noncoherent configuration
NASA Technical Reports Server (NTRS)
Bunce, R.
1977-01-01
The newer model for variance, the Allan technique, now adopted for testing, is analyzed in the subject mode. A model is generated (including a considerable contribution from the station secondary frequency standard), and rationalized with existing data. The variance model is definitely sound; the Allan technique mates theory and measure. The mean-frequency model is an estimate; this problem is yet to be rigorously resolved. The unaltered defining expressions are nonconvergent, and the observed mean is quite erratic.
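For reference, the Allan (two-sample) variance used above can be computed from equally spaced fractional-frequency samples as in the following Python sketch; this non-overlapping estimator is generic and not specific to the DSN test configuration.

import numpy as np

def allan_variance(y, m):
    # Non-overlapping Allan variance at averaging factor m: average the
    # samples in blocks of m, then take half the mean squared difference
    # of successive block averages.
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)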
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
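The abstract does not reproduce the rating formula itself, so the Python sketch below only illustrates the general approach of checking a first-order propagation-of-errors estimate against a Monte Carlo spread; it uses a simplified placeholder rating (mean attenuation minus two standard deviations) rather than the actual Noise Reduction Rating.

import numpy as np

def rating(att):
    # Placeholder rating: mean attenuation minus two standard deviations.
    # The real NRR formula is more involved; this is only for illustration.
    return att.mean() - 2.0 * att.std(ddof=1)

rng = np.random.default_rng(0)
true_mean, true_sd, n_subjects = 30.0, 6.0, 20

# Monte Carlo spread of the rating over simulated subject panels.
ratings = [rating(rng.normal(true_mean, true_sd, n_subjects)) for _ in range(5000)]
print("Monte Carlo SD of rating:", np.std(ratings, ddof=1))

# First-order propagation of errors for normal data:
# var(mean) = sd^2 / n and var(sd) is approximately sd^2 / (2*(n - 1)).
analytic_sd = np.sqrt(true_sd ** 2 / n_subjects
                      + 4.0 * true_sd ** 2 / (2 * (n_subjects - 1)))
print("Propagation-of-errors SD:", analytic_sd)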
Power, Jonathan D; Plitt, Mark; Gotts, Stephen J; Kundu, Prantik; Voon, Valerie; Bandettini, Peter A; Martin, Alex
2018-02-27
"Functional connectivity" techniques are commonplace tools for studying brain organization. A critical element of these analyses is to distinguish variance due to neurobiological signals from variance due to nonneurobiological signals. Multiecho fMRI techniques are a promising means for making such distinctions based on signal decay properties. Here, we report that multiecho fMRI techniques enable excellent removal of certain kinds of artifactual variance, namely, spatially focal artifacts due to motion. By removing these artifacts, multiecho techniques reveal frequent, large-amplitude blood oxygen level-dependent (BOLD) signal changes present across all gray matter that are also linked to motion. These whole-brain BOLD signals could reflect widespread neural processes or other processes, such as alterations in blood partial pressure of carbon dioxide (pCO 2 ) due to ventilation changes. By acquiring multiecho data while monitoring breathing, we demonstrate that whole-brain BOLD signals in the resting state are often caused by changes in breathing that co-occur with head motion. These widespread respiratory fMRI signals cannot be isolated from neurobiological signals by multiecho techniques because they occur via the same BOLD mechanism. Respiratory signals must therefore be removed by some other technique to isolate neurobiological covariance in fMRI time series. Several methods for removing global artifacts are demonstrated and compared, and were found to yield fMRI time series essentially free of motion-related influences. These results identify two kinds of motion-associated fMRI variance, with different physical mechanisms and spatial profiles, each of which strongly and differentially influences functional connectivity patterns. Distance-dependent patterns in covariance are nearly entirely attributable to non-BOLD artifacts.
Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction
NASA Astrophysics Data System (ADS)
Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng
Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely: reduction temperature, reduction time and proportion of additive on recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at 92.5% confidence level.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Berger, Philip; Messner, Michael J; Crosby, Jake; Vacs Renwick, Deborah; Heinrich, Austin
2018-05-01
Spore reduction can be used as a surrogate measure of Cryptosporidium natural filtration efficiency. Estimates of log10 (log) reduction were derived from spore measurements in paired surface and well water samples in Casper Wyoming and Kearney Nebraska. We found that these data were suitable for testing the hypothesis (H 0 ) that the average reduction at each site was 2 log or less, using a one-sided Student's t-test. After establishing data quality objectives for the test (expressed as tolerable Type I and Type II error rates), we evaluated the test's performance as a function of the (a) true log reduction, (b) number of paired samples assayed and (c) variance of observed log reductions. We found that 36 paired spore samples are sufficient to achieve the objectives over a wide range of variance, including the variances observed in the two data sets. We also explored the feasibility of using smaller numbers of paired spore samples to supplement bioparticle counts for screening purposes in alluvial aquifers, to differentiate wells with large volume surface water induced recharge from wells with negligible surface water induced recharge. With key assumptions, we propose a normal statistical test of the same hypothesis (H 0 ), but with different performance objectives. As few as six paired spore samples appear adequate as a screening metric to supplement bioparticle counts to differentiate wells in alluvial aquifers with large volume surface water induced recharge. For the case when all available information (including failure to reject H 0 based on the limited paired spore data) leads to the conclusion that wells have large surface water induced recharge, we recommend further evaluation using additional paired biweekly spore samples. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando
2017-08-01
Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if they are both well implemented. However, if a sequential solution (that is, not only the final coordinates, but also those obtained in previous GNSS epochs) is needed, as for convergence studies, finding a batch solution becomes a very time-consuming task owing to the need for matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch for the solution of the current epoch. Thus filter implementations need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
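The sequential (filter) alternative discussed above can be sketched as a standard measurement update for a static parameter vector; the Python below is a generic recursive least-squares step, not the authors' implementation, and the variable names (x, P, H, z, R) are illustrative.

import numpy as np

def sequential_update(x, P, H, z, R):
    # One epoch of a sequential least-squares (static-state Kalman) update:
    # new observations z with design matrix H and noise covariance R refine
    # the estimate x and its covariance P without re-solving all past epochs.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new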
Describing Chinese hospital activity with diagnosis related groups (DRGs). A case study in Chengdu.
Gong, Zhiping; Duckett, Stephen J; Legge, David G; Pei, Likun
2004-07-01
To examine the applicability of an Australian casemix classification system to the description of Chinese hospital activity. A total of 161,478 inpatient episodes from three Chengdu hospitals with demographic, diagnosis, procedure and billing data for the years 1998/1999, 1999/2000 and 2000/2001 were grouped using the Australian refined diagnosis related groups (AR-DRGs) (version 4.0) grouper. Reduction in variance (R2) and coefficient of variation (CV). Untrimmed reduction in variance (R2) was 0.12 and 0.17 for length of stay (LOS) and cost, respectively. After trimming, R2 values were 0.45 and 0.59 for length of stay and cost, respectively. The Australian refined DRGs provide a good basis for developing a Chinese grouper.
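The reduction-in-variance statistic reported above is one minus the ratio of the within-DRG sum of squares to the total sum of squares; a minimal Python sketch (generic, not tied to the AR-DRG grouper):

import numpy as np

def reduction_in_variance(values, groups):
    # R2 = 1 - (within-group sum of squares) / (total sum of squares),
    # where groups holds the DRG label of each episode and values holds
    # its length of stay or cost.
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    total_ss = np.sum((values - values.mean()) ** 2)
    within_ss = sum(np.sum((values[groups == g] - values[groups == g].mean()) ** 2)
                    for g in np.unique(groups))
    return 1.0 - within_ss / total_ss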
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm that is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of subjective tests were processed by using analysis of variance to justify the statistic significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise difference between the NR algorithms.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The considered research had the objective to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the evolved true statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
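For the variance inflation factor screening mentioned above, a minimal Python sketch (ordinary least squares per column; a generic illustration rather than the simulation code used in the study):

import numpy as np

def variance_inflation_factors(X):
    # VIF_j = 1 / (1 - R_j^2), where R_j^2 is obtained by regressing
    # explanatory variable j on all the other explanatory variables.
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)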
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed in achieving an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, with the use of a conservative value for the field size and the crop statistics from the small political subdivision level, when the estimated stratum variances are compared to those obtained using the LANDSAT data.
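Stratum variances of the kind estimated above feed directly into Neyman (optimum) allocation, in which the sample assigned to each stratum is proportional to the stratum size times its standard deviation; the Python sketch below uses made-up numbers purely for illustration.

import numpy as np

def neyman_allocation(stratum_sizes, stratum_sds, total_sample):
    # Allocate the total sample across strata proportionally to N_h * S_h.
    weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
    return np.rint(total_sample * weights / weights.sum()).astype(int)

print(neyman_allocation([120, 80, 200], [4.0, 9.0, 2.5], total_sample=60))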
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
48 CFR 9904.407-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering studies, experience, or other supporting data) used in setting and revising standards; the period... their related variances may be recognized either at the time purchases of material are entered into the...-price standards are used and related variances are recognized at the time purchases of material are...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, D; Badano, A; Sempau, J
Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to keep the simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of the original probability (no VRT) and the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution), and DS with fractions of 0.2 and 0.8 of the optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged fast (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel of the point response function between the analog and DS VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean for the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.
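The weighting rule described above (weight = original probability over biased probability for the sampled direction) can be sketched for a single emission angle as follows; the mixture model and parameter names are illustrative assumptions, not the fastDETECT2 implementation.

import numpy as np

rng = np.random.default_rng(1)

def sample_cos_theta(f_directed, cos_min):
    # Analog model: isotropic emission, pdf(cos_theta) = 1/2 on [-1, 1].
    # Biased model: with probability f_directed, sample uniformly inside the
    # cone [cos_min, 1] aimed at the sensor; otherwise sample isotropically.
    # The returned weight is analog_pdf / biased_pdf, which keeps the
    # estimator unbiased.
    if rng.random() < f_directed:
        cos_theta = rng.uniform(cos_min, 1.0)
    else:
        cos_theta = rng.uniform(-1.0, 1.0)
    biased_pdf = (1.0 - f_directed) * 0.5
    if cos_theta >= cos_min:
        biased_pdf += f_directed / (1.0 - cos_min)
    return cos_theta, 0.5 / biased_pdf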
ORANGE: a Monte Carlo dose engine for radiotherapy.
van der Zee, W; Hogenbirk, A; van der Marck, S C
2005-02-21
This study presents data for the verification of ORANGE, a fast MCNP-based dose engine for radiotherapy treatment planning. In order to verify the new algorithm, it has been benchmarked against DOSXYZ and against measurements. For the benchmarking, first calculations have been done using the ICCR-XIII benchmark. Next, calculations have been done with DOSXYZ and ORANGE in five different phantoms (one homogeneous, two with bone equivalent inserts and two with lung equivalent inserts). The calculations have been done with two mono-energetic photon beams (2 MeV and 6 MeV) and two mono-energetic electron beams (10 MeV and 20 MeV). Comparison of the calculated data (from DOSXYZ and ORANGE) against measurements was possible for a realistic 10 MV photon beam and a realistic 15 MeV electron beam in a homogeneous phantom only. For the comparison of calculated dose distributions against each other and against measurements, the concept of the confidence limit (CL) has been used. This concept reduces the difference between two data sets to a single number, which gives the deviation for 90% of the dose distributions. Using this concept, it was found that ORANGE was always within the statistical bandwidth of DOSXYZ and the measurements. The ICCR-XIII benchmark showed that ORANGE is seven times faster than DOSXYZ, a result comparable with other accelerated Monte Carlo dose systems when no variance reduction is used. As shown for XVMC, using variance reduction techniques has the potential for further acceleration. Using modern computer hardware, this brings the total calculation time for a dose distribution with 1.5% (statistical) accuracy within the clinical range (less than 10 min). This means that ORANGE can be a candidate for a dose engine in radiotherapy treatment planning.
Determination of the STIS CCD Gain
NASA Astrophysics Data System (ADS)
Riley, Allyssa; Monroe, TalaWanda; Lockwood, Sean
2016-09-01
This report summarizes the analysis and absolute gain results of the STIS Cycle 23 special calibration program 14424 that was designed to measure the gain of amplifiers A, C and D at nominal gain settings of 1 and 4 e-/DN. We used the mean-variance technique and the results indicate a <3.5% change in the gain for amplifier D from when it was originally calculated pre-flight. We compared these values to previous measurements from Cycles 17 through 23. This report outlines the observations, methodology, and results of the mean-variance technique.
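The mean-variance (photon transfer) technique referred to above fits signal variance against mean signal from flat-field data; under shot-noise-limited conditions the slope is the reciprocal of the gain. A minimal Python sketch, not the calibration pipeline used in the report:

import numpy as np

def mean_variance_gain(means_dn, variances_dn):
    # Fit variance (DN^2) versus mean signal (DN); for a shot-noise-limited
    # detector, variance = mean/gain + read-noise term, so the gain in e-/DN
    # is 1 over the fitted slope.
    slope, _intercept = np.polyfit(np.asarray(means_dn, float),
                                   np.asarray(variances_dn, float), 1)
    return 1.0 / slope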
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Situation awareness measures for simulated submarine track management.
Loft, Shayne; Bowden, Vanessa; Braithwaite, Janelle; Morrell, Daniel B; Huf, Samuel; Durso, Francis T
2015-03-01
The aim of this study was to examine whether the Situation Present Assessment Method (SPAM) and the Situation Awareness Global Assessment Technique (SAGAT) predict incremental variance in performance on a simulated submarine track management task and to measure the potential disruptive effect of these situation awareness (SA) measures. Submarine track managers use various displays to localize and track contacts detected by own-ship sensors. The measurement of SA is crucial for designing effective submarine display interfaces and training programs. Participants monitored a tactical display and sonar bearing-history display to track the cumulative behaviors of contacts in relationship to own-ship position and landmarks. SPAM (or SAGAT) and the Air Traffic Workload Input Technique (ATWIT) were administered during each scenario, and the NASA Task Load Index (NASA-TLX) and Situation Awareness Rating Technique were administered postscenario. SPAM and SAGAT predicted variance in performance after controlling for subjective measures of SA and workload, and SA for past information was a stronger predictor than SA for current/future information. The NASA-TLX predicted performance on some tasks. Only SAGAT predicted variance in performance on all three tasks but marginally increased subjective workload. SPAM, SAGAT, and the NASA-TLX can predict unique variance in submarine track management performance. SAGAT marginally increased subjective workload, but this increase did not lead to any performance decrement. Defense researchers have identified SPAM as an alternative to SAGAT because it would not require field exercises involving submarines to be paused. SPAM was not disruptive, but it is potentially problematic that SPAM did not predict variance in all three performance tasks. © 2014, Human Factors and Ergonomics Society.
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
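A minimal sketch of the variance extraction step described above: subtract an instrument noise variance from the along-track radiance variance and clip at zero. The function is generic; the actual AMSU-A noise characterization is more involved.

import numpy as np

def gravity_wave_variance(radiances, noise_variance):
    # Along-track radiance variance with the instrument/measurement noise
    # variance removed; negative results fall below the detection floor.
    total = np.var(np.asarray(radiances, dtype=float), ddof=1)
    return max(total - noise_variance, 0.0)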
Advanced Communication Processing Techniques Held in Ruidoso, New Mexico on 14-17 May 1989
1990-01-01
Evaluation criteria listed include: probability of detection and false alarm; variances of parameter estimators; and probability of correct classification and rejection. The remainder of this record consists of OCR residue from presentation slides and transcript fragments (referencing the standard Neyman-Pearson detection approach and variance-based features) and is not recoverable.
Differences in head impulse test results due to analysis techniques.
Cleworth, Taylor W; Carpenter, Mark G; Honegger, Flurin; Allum, John H J
2017-01-01
Different analysis techniques are used to define vestibulo-ocular reflex (VOR) gain between eye and head angular velocity during the video head impulse test (vHIT). Comparisons would aid selection of gain techniques best related to head impulse characteristics and promote standardisation. Compare and contrast known methods of calculating vHIT VOR gain. We examined lateral canal vHIT responses recorded from 20 patients twice within 13 weeks of acute unilateral peripheral vestibular deficit onset. Ten patients were tested with an ICS Impulse system (GN Otometrics) and 10 with an EyeSeeCam (ESC) system (Interacoustics). Mean gain and variance were computed with area, average sample gain, and regression techniques over specific head angular velocity (HV) and acceleration (HA) intervals. Results for the same gain technique were not different between measurement systems. Area and average sample gain yielded equally lower variances than regression techniques. Gains computed over the whole impulse duration were larger than those computed for increasing HV. Gain over decreasing HV was associated with larger variances. Gains computed around peak HV were smaller than those computed around peak HA. The median gain over 50-70 ms was not different from gain around peak HV. However, depending on technique used, the gain over increasing HV was different from gain around peak HA. Conversion equations between gains obtained with standard ICS and ESC methods were computed. For low gains, the conversion was dominated by a constant that needed to be added to ESC gains to equal ICS gains. We recommend manufacturers standardize vHIT gain calculations using 2 techniques: area gain around peak HA and peak HV.
Uddin, Muhammad Shahin; Halder, Kalyan Kumar; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-11-01
Ultrasound (US) imaging is a widely used clinical diagnostic tool in medical imaging techniques. It is a comparatively safe, economical, painless, portable, and noninvasive real-time tool compared to the other imaging modalities. However, the image quality of US imaging is severely affected by the presence of speckle noise and blur during the acquisition process. In order to ensure a high-quality clinical diagnosis, US images must be restored by reducing their speckle noise and blur. In general, speckle noise is modeled as a multiplicative noise following a Rayleigh distribution and blur as a Gaussian function. To this end, we propose an intelligent estimator based on artificial neural networks (ANNs) to estimate the variances of noise and blur, which, in turn, are used to obtain an image without discernible distortions. A set of statistical features computed from the image and its complex wavelet sub-bands are used as input to the ANN. In the proposed method, we solve the inverse Rayleigh function numerically for speckle reduction and use the Richardson-Lucy algorithm for de-blurring. The performance of this method is compared with that of the traditional methods by applying them to a synthetic, physical phantom and clinical data, which confirms better restoration results by the proposed method.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Also procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCC's). However testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next simulation results on several memory 4 RSCC's are shown. It is found that the best BER performance at low Eb/N0 is not given by the RSCC's that were found using the analytic techniques given so far. Next the results are given from simulations using a smaller memory RSCC for one of the constituent encoders. Significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
Schiebener, Johannes; Brand, Matthias
2017-06-01
Previous literature has explained older individuals' disadvantageous decision-making under ambiguity in the Iowa Gambling Task (IGT) by reduced emotional warning signals preceding decisions. We argue that age-related reductions in IGT performance may also be explained by reductions in certain cognitive abilities (reasoning, executive functions). In 210 participants (18-86 years), we found that the age-related variance on IGT performance occurred only in the last 60 trials. The effect was mediated by cognitive abilities and their relation with decision-making performance under risk with explicit rules (Game of Dice Task). Thus, reductions in cognitive functions in older age may be associated with both a reduced ability to gain explicit insight into the rules of the ambiguous decision situation and with failure to choose the less risky options consequently after the rules have been understood explicitly. Previous literature may have underestimated the relevance of cognitive functions for age-related decline in decision-making performance under ambiguity.
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.
2016-02-01
Quality improvement must be performed in a company to maintain its product competitiveness in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly process of a final product. Hence quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate the variance reduction is developed. Variance reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest a portion of its financial resources in the suppliers' learning processes. The objective function of the model is to minimize the total cost, which consists of the investment cost and the quality costs, both internal and external. A learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R
2007-01-01
Background: Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H-1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results: Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion: We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
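One common parameterisation of the generalised logarithm discussed above is glog(x) = ln(x + sqrt(x^2 + lambda)); a minimal Python sketch, where the transform parameter lambda would in practice be estimated from technical replicates rather than fixed as it is here.

import numpy as np

def glog(x, lam=1.0):
    # Generalised logarithm: behaves like ln(2x) for large intensities but
    # remains finite near zero, which is what stabilises the technical
    # variance. lam = 1.0 is only a placeholder value.
    x = np.asarray(x, dtype=float)
    return np.log(x + np.sqrt(x ** 2 + lam))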
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
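The importance-sampling idea described above (simulate under altered probabilities, then multiply by the likelihood ratio) can be illustrated with a rare tail probability of a standard normal; the exponential tilt below is a textbook Python example, unrelated to the neutron-transport models in the dissertation.

import numpy as np

rng = np.random.default_rng(2)
c, n = 5.0, 100_000   # rare-event threshold and sample size

# Plain Monte Carlo: essentially no samples exceed the threshold.
print("plain MC:", np.mean(rng.standard_normal(n) > c))

# Importance sampling: draw from N(c, 1) and reweight by the likelihood ratio
# w(x) = phi(x) / phi(x - c) = exp(c^2/2 - c*x) to keep the estimate unbiased.
x = rng.normal(loc=c, scale=1.0, size=n)
w = np.exp(0.5 * c ** 2 - c * x)
print("importance sampling:", np.mean((x > c) * w))   # near 2.87e-7 for c = 5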
Schroer, William C; Diesfeld, Paul J; Reedy, Mary E; Lemarr, Angela R
2008-06-01
A total of 50 total knee arthroplasty (TKA) patients, 25 traditional and 25 minimally invasive surgical (MIS), underwent computed tomography scans to determine if a loss of accuracy in implant alignment occurred when a surgeon switched from a traditional medial parapatellar arthrotomy to a mini-subvastus surgical technique. Surgical accuracy was determined by comparing the computed tomography measured implant alignment with the surgical alignment goals. There was no loss in accuracy in the implantation of the tibial component with the mini-subvastus technique. The mean variance for the tibial coronal alignment was 1.03 degrees for the traditional TKA and 1.00 degrees for the MIS TKA (P = .183). Similarly, there was no difference in the mean variance for the posterior tibial slope (P = .054). Femoral coronal alignment was less accurate with the MIS procedure, mean variance of 1.04 degrees and 1.71 degrees for the traditional and MIS TKA, respectively (P = .045). Instrumentation and surgical technique concerns that led to this loss in accuracy were determined.
NASA Astrophysics Data System (ADS)
Kamrowski, Ruth L.; Sutton, Stephen G.; Tobin, Renae C.; Hamann, Mark
2014-09-01
Artificial lighting along coastlines poses a significant threat to marine turtles due to the importance of light for their natural orientation at the nesting beach. Effective lighting management requires widespread support and participation, yet engaging the public with light reduction initiatives is difficult because benefits associated with artificial lighting are deeply entrenched within modern society. We present a case study from Queensland, Australia, where an active light-glow reduction campaign has been in place since 2008 to protect nesting turtles. Semi-structured questionnaires explored community beliefs about reducing light and evaluated the potential for using persuasive communication techniques based on the theory of planned behavior (TPB) to increase engagement with light reduction. Respondents (n = 352) had moderate to strong intentions to reduce light. TPB variables explained a significant proportion of variance in intention (multiple regression: R2 = 0.54-0.69, P < 0.001), but adding a personal norm variable improved the model (R2 = 0.73-0.79, P < 0.001). Significant differences in belief strength between campaign compliers and non-compliers suggest that targeting the beliefs that reducing light leads to "increased protection of local turtles" (P < 0.01) and/or "benefits to the local economy" (P < 0.05), in combination with an appeal to personal norms, would produce the strongest persuasion potential for future communications. Selective legislation and commitment strategies may be further useful strategies to increase community light reduction. As artificial light continues to gain attention as a pollutant, our methods and findings will be of interest to anyone needing to manage public artificial lighting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
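For a single quantity with multivariate normal errors, the maximum likelihood combination referred to above reduces to generalized-least-squares weighting by the inverse covariance matrix; a minimal Python sketch with illustrative names:

import numpy as np

def combine_correlated_estimates(estimates, covariance):
    # Minimum-variance (maximum likelihood) combination of correlated
    # estimates x of one quantity:
    #   x_hat = (1' C^-1 x) / (1' C^-1 1),  var(x_hat) = 1 / (1' C^-1 1).
    x = np.asarray(estimates, dtype=float)
    c_inv = np.linalg.inv(np.asarray(covariance, dtype=float))
    ones = np.ones_like(x)
    denom = ones @ c_inv @ ones
    return (ones @ c_inv @ x) / denom, 1.0 / denom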
Juntavee, Niwut; Juntavee, Apa; Saensutthawijit, Phuwiwat
2018-02-01
This study evaluated the effect of light-emitting diode (LED) illumination bleaching technique on the surface nanohardness of various computer-aided design and computer-aided manufacturing (CAD/CAM) ceramic materials. Twenty disk-shaped samples (width, length, and thickness = 10, 15, and 2 mm) were prepared from each of the ceramic materials for CAD/CAM, including Lava™ Ultimate (L V ), Vita Enamic® (E n ) IPS e.max® CAD (M e ), inCoris® TZI (I C ), and Prettau® zirconia (P r ). The samples from each type of ceramic were randomly divided into two groups based on the different bleaching techniques to be used on them, using 35% hydrogen peroxide with and without LED illumination. The ceramic disk samples were bleached according to the manufacturer's instruction. Surface hardness test was performed before and after bleaching using nanohardness tester with a Berkovich diamond indenter. The respective Vickers hardness number upon no bleaching and bleaching without or with LED illumination [mean ± standard deviation (SD)] for each type of ceramic were as follows: 102.52 ± 2.09, 101.04 ± 1.18, and 98.17 ± 1.15 for L V groups; 274.96 ± 5.41, 271.29 ± 5.94, and 268.20 ± 7.02 for E n groups; 640.74 ± 31.02, 631.70 ± 22.38, and 582.32 ± 33.88 for M e groups; 1,442.09 ± 35.07, 1,431.32 ± 28.80, and 1,336.51 ± 34.03 for I C groups; and 1,383.82 ± 33.87, 1,343.51 ± 38.75, and 1,295.96 ± 31.29 for P r groups. The results indicated surface hardness reduction following the bleaching procedure of varying degrees for different ceramic materials. Analysis of variance (ANOVA) revealed a significant reduction in surface hardness due to the effect of bleaching technique, ceramic material, and the interaction between bleaching technique and ceramic material (p < 0.05). Bleaching resulted in a diminution of the surface hardness of dental ceramic for CAD/CAM. Using 35% hydrogen peroxide bleaching agent with LED illumination exhibited more reduction in surface hardness of dental ceramic than what was observed without LED illumination. Clinicians should consider protection of the existing restoration while bleaching.
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.
Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
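A minimal Python sketch of a local variance anomaly test of the kind described above: each pixel is compared with the mean and standard deviation of its surrounding window. The window size and k threshold are illustrative choices, not those used in the study.

import numpy as np

def local_anomaly(ndvi, window=5, k=2.0):
    # Returns +1 where a pixel exceeds its local mean by more than k standard
    # deviations, -1 where it falls below by more than k, and 0 otherwise.
    ndvi = np.asarray(ndvi, dtype=float)
    pad = window // 2
    padded = np.pad(ndvi, pad, mode="reflect")
    out = np.zeros(ndvi.shape, dtype=int)
    for i in range(ndvi.shape[0]):
        for j in range(ndvi.shape[1]):
            block = padded[i:i + window, j:j + window]
            mu, sd = block.mean(), block.std()
            if ndvi[i, j] > mu + k * sd:
                out[i, j] = 1
            elif ndvi[i, j] < mu - k * sd:
                out[i, j] = -1
    return out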
Uechi, Ken; Asakura, Keiko; Masayasu, Shizuko; Sasaki, Satoshi
2017-06-01
Salt intake in Japan remains high; therefore, exploring within-country variation in salt intake and its cause is an important step in the establishment of salt reduction strategies. However, no nationwide evaluation of this variation has been conducted by urinalysis. We aimed to clarify whether within-country variation in salt intake exists in Japan after adjusting for individual characteristics. Healthy men (n=1027) and women (n=1046) aged 20-69 years were recruited from all 47 prefectures of Japan. Twenty-four-hour sodium excretion was estimated using three spot urine samples collected on three nonconsecutive days. The study area was categorized into 12 regions defined by the National Health and Nutrition Survey Japan. Within-country variation in sodium excretion was estimated as a population (region)-level variance using a multilevel model with random intercepts, with adjustment for individual biological, socioeconomic and dietary characteristics. Estimated 24 h sodium excretion was 204.8 mmol per day in men and 155.7 mmol per day in women. Sodium excretion was high in the Northeastern region. However, population-level variance was extremely small after adjusting for individual characteristics (0.8 and 2% of overall variance in men and women, respectively) compared with individual-level variance (99.2 and 98% of overall variance in men and women, respectively). Among individual characteristics, greater body mass index, living with a spouse and high miso-soup intake were associated with high sodium excretion in both sexes. Within-country variation in salt intake in Japan was extremely small compared with individual-level variation. Salt reduction strategies for Japan should be comprehensive and should not address the small within-country differences in intake.
Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils
Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.
2008-01-01
Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude
2003-01-01
Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.
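A minimal sketch of the analysis pattern, on synthetic data standing in for the flattened FDG-PET images: project subjects onto principal components and test which component scores correlate with age. The use of scikit-learn and SciPy, and the fake age-related spatial pattern, are assumptions.

```python
# Sketch: project subjects' flattened PET images onto principal components
# and test which component scores correlate with age. Synthetic data stand
# in for the real FDG-PET scans.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_voxels = 74, 5000
age = rng.uniform(20, 80, n_subjects)
# Fake images: one spatial pattern whose loading declines with age, plus noise
pattern = rng.normal(0, 1, n_voxels)
images = (np.outer(-0.02 * (age - age.mean()), pattern)
          + rng.normal(0, 1, (n_subjects, n_voxels)))

pca = PCA(n_components=10)
scores = pca.fit_transform(images)          # subject projections on each PC

for k in range(scores.shape[1]):
    r, p = pearsonr(scores[:, k], age)
    if p < 0.05:
        print(f"PC{k + 1}: {100 * pca.explained_variance_ratio_[k]:.1f}% of "
              f"variance, r(age) = {r:.2f}, p = {p:.3f}")
```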
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining the probability of such rare events using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
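As a concrete illustration of the core idea, the sketch below runs a generic subset simulation with a component-wise (modified) Metropolis sampler in standard normal space. The toy limit-state function g, the level probability p0 = 0.1, and the proposal spread are illustrative assumptions; the paper applies the same scheme to Gillespie trajectories rather than to a simple analytic g.

```python
# Generic subset simulation sketch: the rare-event probability P(g(U) > b)
# is built up as a product of conditional probabilities estimated level by
# level, with a component-wise modified Metropolis sampler keeping the chain
# inside the current subset. g, p0, and the proposal spread are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def g(u):
    # Toy response: standard normal, so the exact answer is norm.sf(b)
    return u.sum(axis=-1) / np.sqrt(u.shape[-1])

def subset_simulation(g, dim, b, n=2000, p0=0.1, spread=1.0):
    u = rng.normal(size=(n, dim))
    y = g(u)
    prob = 1.0
    n_seed = int(p0 * n)
    while True:
        order = np.argsort(y)[::-1]                 # largest responses first
        threshold = y[order[n_seed - 1]]
        if threshold >= b:                          # final level reached
            return prob * np.mean(y >= b)
        prob *= p0
        seeds_u = u[order[:n_seed]]
        seeds_y = y[order[:n_seed]]
        chains = n // n_seed                        # samples grown per seed
        new_u, new_y = [], []
        for us, ys in zip(seeds_u, seeds_y):
            cur_u, cur_y = us.copy(), ys
            for _ in range(chains):
                cand = cur_u.copy()
                for j in range(dim):                # component-wise update
                    prop = cur_u[j] + spread * rng.normal()
                    if rng.uniform() < min(1.0, norm.pdf(prop) / norm.pdf(cur_u[j])):
                        cand[j] = prop
                cand_y = g(cand)
                if cand_y >= threshold:             # stay inside the subset
                    cur_u, cur_y = cand, cand_y
                new_u.append(cur_u.copy())
                new_y.append(cur_y)
        u, y = np.array(new_u), np.array(new_y)

b = 4.0
print("subset-simulation estimate:", subset_simulation(g, dim=10, b=b))
print("exact value              :", norm.sf(b))
```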
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf
2012-01-01
Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
Dynamic Repertoire of Intrinsic Brain States Is Reduced in Propofol-Induced Unconsciousness
Liu, Xiping; Pillay, Siveshigan
2015-01-01
The richness of conscious experience is thought to scale with the size of the repertoire of causal brain states, and it may be diminished in anesthesia. We estimated the state repertoire from dynamic analysis of intrinsic functional brain networks in conscious sedated and unconscious anesthetized rats. Functional magnetic resonance images were obtained from 30-min whole-brain resting-state blood oxygen level-dependent (BOLD) signals at propofol infusion rates of 20 and 40 mg/kg/h, intravenously. Dynamic brain networks were defined at the voxel level by sliding window analysis of regional homogeneity (ReHo) or coincident threshold crossings (CTC) of the BOLD signal acquired in nine sagittal slices. The state repertoire was characterized by the temporal variance of the number of voxels with significant ReHo or positive CTC. From low to high propofol dose, the temporal variances of ReHo and CTC were reduced by 78%±20% and 76%±20%, respectively. Both baseline and propofol-induced reduction of CTC temporal variance increased from lateral to medial position. Group analysis showed a 20% reduction in the number of unique states at the higher propofol dose. Analysis of temporal variance in 12 anatomically defined regions of interest predicted that the largest changes occurred in visual cortex, parietal cortex, and caudate-putamen. The results suggest that the repertoire of large-scale brain states derived from the spatiotemporal dynamics of intrinsic networks is substantially reduced at an anesthetic dose associated with loss of consciousness. PMID:24702200
A Quantitative Evaluation of SCEC Community Velocity Model Version 3.0
NASA Astrophysics Data System (ADS)
Chen, P.; Zhao, L.; Jordan, T. H.
2003-12-01
We present a systematic methodology for evaluating and improving 3D seismic velocity models using broadband waveform data from regional earthquakes. The operator that maps a synthetic waveform into an observed waveform is expressed in the Rytov form D(ω) = exp[iω δτ_p(ω) − ω δτ_q(ω)]. We measure the phase delay time δτ_p(ω) and the amplitude reduction time δτ_q(ω) as a function of frequency ω using Gee & Jordan's [1992] isolation-filter technique, and we correct the data for frequency-dependent interference and frequency-independent source statics. We have applied this procedure to a set of small events in Southern California. Synthetic seismograms were computed using three types of velocity models: the 1D Standard Southern California Crustal Model (SoCaL) [Dreger & Helmberger, 1993], the 3D SCEC Community Velocity Model, Version 3.0 (CVM3.0) [Magistrale et al., 2000], and a set of path-averaged 1D models (A1D) extracted from CVM3.0 by horizontally averaging wave slownesses along source-receiver paths. The 3D synthetics were computed using K. Olsen's finite difference code. More than 1000 measurements were made on both P and S waveforms at frequencies ranging from 0.2 to 1 Hz. Overall, the 3D model provided a substantially better fit to the waveform data than either laterally homogeneous or path-dependent 1D models. Relative to SoCaL, CVM3.0 provided a variance reduction of about 64% in δτ_p and 41% in δτ_q. Relative to A1D, the variance reduction was about 46% and 20%, respectively. The same set of measurements can be employed to invert for both seismic source properties and seismic velocity structures. Fully numerical methods are being developed to compute the Fréchet kernels for these measurements [L. Zhao et al., this meeting]. This methodology thus provides a unified framework for regional studies of seismic sources and Earth structure in Southern California and elsewhere.
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Hemmat, Shirin M.; Wang, Steven J.; Ryan, William R.
2016-01-01
Introduction: Neck dissection (ND) technique preferences are not well reported. Objective: The objective of this study is to educate practitioners and trainees about surgical technique commonality and variance used by head and neck oncologic surgeons when performing a ND. Methods: Online survey of surgeon members of the American Head and Neck Society (AHNS). The survey investigated respondents' demographic information, degree of surgical experience, and ND technique preferences. Results: In our study, 283 out of 1,010 (28%) AHNS surgeon members with a mean age of 50.3 years (range 32–77 years) completed surveys from 41 states and 24 countries. We found that 205 (72.4%) had completed a fellowship in head and neck surgical oncology. Also, 225 (79.5%) respondents reported completing more than 25 NDs per year. ND technique commonalities (>66% of respondents) included: preserving level 5 (unless there were suspicious lymph nodes (LN)), only excising the portion of sternocleidomastoid muscle involved with tumor, resecting lymphatic tissue en bloc, preservation of cervical sensory rootlets, not performing submandibular gland (SMG) transfer, placing one drain for unilateral selective NDs, and performing a ND after parotidectomy and thyroidectomy and before transcervical approaches to upper aerodigestive tract primary site. Variability existed in the sequence of LN levels excised, instrument preferences, criteria for drain removal, the timing of a ND with transoral upper aerodigestive tract primary site resections, and submandibular gland preservation. Results showed that 122 (43.1%) surgeons reported that they preserve the submandibular gland during the level 1b portion of a ND. Conclusions: The commonalities and variances reported for the ND technique may help put individual preferences into context. PMID:28050201
Demixed principal component analysis of neural population data.
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-04-12
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-01
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
Evaluation of three lidar scanning strategies for turbulence measurements
NASA Astrophysics Data System (ADS)
Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas
2016-05-01
Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
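To make the contamination mechanism concrete, the toy simulation below shows how a DBS-style retrieval of u picks up vertical-velocity variance when opposed beams sample uncorrelated volumes, and how an estimate of var(w) from a vertical beam can remove part of that term. The beam geometry, the uncorrelated-volume assumption, and the specific subtraction formula are simplifications for illustration, not the correction actually developed by the authors.

```python
# Illustration (not the authors' exact method): with opposed off-vertical
# beams, u_retrieved = u + (w_east - w_west) * tan(phi) / 2, so when the two
# probe volumes see uncorrelated w, var(u_retrieved) gains roughly
# 0.5 * var(w) * tan(phi)^2. Subtracting that term, using var(w) from a
# vertical beam, reduces the contamination.
import numpy as np

rng = np.random.default_rng(3)
phi = np.deg2rad(62.0)                 # elevation angle of the slanted beams (assumed)
n = 20000
u = rng.normal(5.0, 1.0, n)            # "true" along-wind component
w_east = rng.normal(0.0, 0.6, n)       # vertical velocity in the east probe volume
w_west = rng.normal(0.0, 0.6, n)       # ...and in the west probe volume (uncorrelated)

vr_east = u * np.cos(phi) + w_east * np.sin(phi)      # radial velocities
vr_west = -u * np.cos(phi) + w_west * np.sin(phi)
u_ret = (vr_east - vr_west) / (2.0 * np.cos(phi))     # DBS retrieval of u

var_w_vertical = np.var(w_east)        # stand-in for the vertical-beam w variance
var_u_raw = np.var(u_ret)
var_u_corrected = var_u_raw - 0.5 * var_w_vertical * np.tan(phi) ** 2

print(f"true var(u)      : {np.var(u):.3f}")
print(f"retrieved var(u) : {var_u_raw:.3f}  (contaminated)")
print(f"corrected var(u) : {var_u_corrected:.3f}")
```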
LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints
NASA Technical Reports Server (NTRS)
Swei, Sean S.M.; Ayoubi, Mohammad A.
2017-01-01
This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane L.; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Uncertainty estimation and multi sensor fusion for kinematic laser tracker measurements
NASA Astrophysics Data System (ADS)
Ulrich, Thomas
2013-08-01
Laser trackers are widely used to measure kinematic tasks such as tracking robot movements. Common methods to evaluate the uncertainty in the kinematic measurement include approximations specified by the manufacturers, various analytical adjustment methods and the Kalman filter. In this paper a new, real-time technique is proposed, which estimates the 4D-path (3D-position + time) uncertainty of an arbitrary path in space. Here a hybrid system estimator is applied in conjunction with the kinematic measurement model. This method can be applied to processes, which include various types of kinematic behaviour, constant velocity, variable acceleration or variable turn rates. The new approach is compared with the Kalman filter and a manufacturer's approximations. The comparison was made using data obtained by tracking an industrial robot's tool centre point with a Leica laser tracker AT901 and a Leica laser tracker LTD500. It shows that the new approach is more appropriate to analysing kinematic processes than the Kalman filter, as it reduces overshoots and decreases the estimated variance. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour with an improved description of the real measurement process and a reduction in estimated variance. This approach is therefore well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour as well as the fusion among laser trackers.
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The considered endpoints are the last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for individual endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for the calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
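A small sketch of such a check is given below: it computes standardized distances between the means and variances of observed and simulated endpoint values and tests them against a plus-or-minus 10% tolerance. The exact standardized-distance definition used in the study is not reproduced here, so the form below and the synthetic cholesterol values are assumptions.

```python
# Sketch of a +/-10% standardized-distance validity check between real and
# simulated endpoints. The precise definition used in the study is assumed.
import numpy as np

def standardized_distances(real, simulated):
    d_mean = (np.mean(simulated) - np.mean(real)) / np.std(real, ddof=1)
    d_var = (np.var(simulated, ddof=1) - np.var(real, ddof=1)) / np.var(real, ddof=1)
    return d_mean, d_var

def is_valid(real, simulated, tol=0.10):
    d_mean, d_var = standardized_distances(real, simulated)
    return abs(d_mean) <= tol and abs(d_var) <= tol

rng = np.random.default_rng(4)
real_chol = rng.normal(210, 35, 120)     # hypothetical observed cholesterol levels
sim_chol = rng.normal(212, 36, 1000)     # hypothetical simulated trial output
print(standardized_distances(real_chol, sim_chol), is_valid(real_chol, sim_chol))
```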
Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar
NASA Technical Reports Server (NTRS)
Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.
2012-01-01
In this paper, we develop an estimator for the vertical flux of horizontal momentum with arbitrary beam pointing, applicable to the case of arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted for high-frequency wave motions. The estimator utilises the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment, we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.
An Evolutionary Perspective on Epistasis and the Missing Heritability
Hemani, Gibran; Knott, Sara; Haley, Chris
2013-01-01
The relative importance between additive and non-additive genetic variance has been widely argued in quantitative genetics. By approaching this question from an evolutionary perspective we show that, while additive variance can be maintained under selection at a low level for some patterns of epistasis, the majority of the genetic variance that will persist is actually non-additive. We propose that one reason that the problem of the “missing heritability” arises is because the additive genetic variation that is estimated to be contributing to the variance of a trait will most likely be an artefact of the non-additive variance that can be maintained over evolutionary time. In addition, it can be shown that even a small reduction in linkage disequilibrium between causal variants and observed SNPs rapidly erodes estimates of epistatic variance, leading to an inflation in the perceived importance of additive effects. We demonstrate that the perception of independent additive effects comprising the majority of the genetic architecture of complex traits is biased upwards and that the search for causal variants in complex traits under selection is potentially underpowered by parameterising for additive effects alone. Given dense SNP panels the detection of causal variants through genome-wide association studies may be improved by searching for epistatic effects explicitly. PMID:23509438
Lowthian, P; Disler, P; Ma, S; Eagar, K; Green, J; de Graaff, S
2000-10-01
To investigate whether the Australian National Sub-acute and Non-acute Patient Casemix Classification (SNAP) and Functional Independence Measure and Functional Related Group (Version 2) (FIM-FRG2) casemix systems can be used to predict functional outcome, and reduce the variance of length of stay (LOS) of patients undergoing rehabilitation after strokes. The study comprised a retrospective analysis of the records of patients admitted to the Cedar Court Healthsouth Rehabilitation Hospital for rehabilitation after stroke. The sample included 547 patients (83.3% of those admitted with stroke during this period). Patient data were stratified for analysis into the five SNAP or nine FIM-FRG2 groups, on the basis of the admission FIM scores and age. The AN-SNAP classification accounted for a 30.7% reduction in the variance of LOS and a 44.2% reduction in the variance of motor FIM, while the FIM-FRG2 accounted for reductions of 33.5% and 56.4%, respectively. Comparison of the Cedar Court data with the national AN-SNAP data showed differences in the LOS and functional outcomes of older, severely disabled patients. Intensive rehabilitation in selected patients of this type appears to have positive effects, albeit with a slightly longer period of inpatient rehabilitation. Casemix classifications can be powerful management tools. Although the FIM-FRG2 accounts for a greater reduction in variance than SNAP, division into nine groups meant that some groups contained few subjects. This paper supports the introduction of AN-SNAP as the standard casemix tool for rehabilitation in Australia, which will hopefully lead to rational, adequate funding of the rehabilitation phase of care.
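The "reduction in variance" figures quoted for casemix classes are, in essence, the share of outcome variance explained by class membership. The sketch below computes that quantity for a synthetic length-of-stay variable grouped into five hypothetical classes; it illustrates the statistic, not the AN-SNAP grouping rules.

```python
# Sketch: percentage reduction in LOS variance achieved by a grouping,
# i.e. 1 - var(residuals about group means) / var(total). Synthetic data.
import numpy as np
import pandas as pd

def variance_reduction(values, groups):
    df = pd.DataFrame({"y": values, "g": groups})
    group_means = df.groupby("g")["y"].transform("mean")
    return 1.0 - np.var(df["y"] - group_means) / np.var(df["y"])

rng = np.random.default_rng(5)
n = 547
snap_class = rng.integers(0, 5, n)                   # 5 hypothetical casemix classes
los = 20 + 6 * snap_class + rng.normal(0, 9, n)      # synthetic length of stay (days)
print(f"LOS variance explained by class: {100 * variance_reduction(los, snap_class):.1f}%")
```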
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level were observed to affect the two types of images in a similar way, with an increase in both Hs and water level resulting in xi shifting offshore. This water-level-induced xi variability shows the opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and water level allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.
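As a minimal illustration of how these two image products are formed, the sketch below builds a Time-exposure image (temporal mean) and a "Variance" image (temporal standard deviation) from a synthetic frame stack and picks the cross-shore intensity maximum xi per alongshore row. The synthetic stack and the simple argmax are assumptions standing in for rectified video and the study's actual xi extraction.

```python
# Sketch: Time-exposure (mean over time) and Variance (std over time) images
# from a stack of video frames, plus the cross-shore location of maximum
# intensity (xi) per alongshore row.
import numpy as np

rng = np.random.default_rng(6)
n_frames, n_alongshore, n_crossshore = 600, 50, 200
frames = rng.random((n_frames, n_alongshore, n_crossshore)) * 0.1
# Fake intermittent breaking over a "sandbar" near cross-shore index 120
breaking = (rng.random((n_frames, n_alongshore)) < 0.3).astype(float)
frames[:, :, 115:125] += breaking[:, :, None]

timex = frames.mean(axis=0)            # Time-exposure image
variance_img = frames.std(axis=0)      # "Variance" image (std of intensity)

xi_timex = timex.argmax(axis=1)        # cross-shore index of max intensity, per row
xi_var = variance_img.argmax(axis=1)
print("mean xi (Timex):", xi_timex.mean(), " mean xi (Variance):", xi_var.mean())
```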
Relationship between extrinsic factors and the acromio-humeral distance.
Mackenzie, Tanya Anne; Herrington, Lee; Funk, Lenard; Horsley, Ian; Cools, Ann
2016-06-01
Maintenance of the subacromial space is important in impingement syndromes. Research exploring the correlation between biomechanical factors and the subacromial space would be beneficial. To establish whether a relationship exists between the independent variables of scapular rotation, shoulder internal rotation, shoulder external rotation, total arc of shoulder rotation, pectoralis minor length, thoracic curve, and shoulder activity level with the dependent variables: AHD in neutral, AHD in 60° arm abduction, and percentage reduction in AHD. Controlled laboratory study. Data from 72 male control shoulders (mean age 24.28 years, SD 6.81 years) and 186 elite sportsmen's shoulders (mean age 25.19 years, SD 5.17 years) were included in the analysis. The independent variables were quantified and real time ultrasound was used to measure the dependent variable acromio-humeral distance. Shoulder internal rotation and pectoralis minor length explained 8% and 6%, respectively, of the variance in acromio-humeral distance in neutral. Pectoralis minor length accounted for 4% of the variance in 60° arm abduction. Total arc of rotation, shoulder external rotation range, and shoulder activity levels explained 9%, 15%, and 16%-29% of the variance, respectively, in percentage reduction in acromio-humeral distance during arm abduction to 60°. Pectoralis minor length, shoulder rotation ranges, total arc of shoulder rotation, and shoulder activity levels were found to have weak to moderate relationships with acromio-humeral distance. The existence and strength of these relationships were population specific and dependent on arm position. The relationships only accounted for small amounts of variance in AHD, indicating that in addition to these factors there are other factors involved in determining AHD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model combining 13 studies on tuberculosis vaccine. PMID:23750070
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared with DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width at half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared with DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers.
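The sketch below illustrates the DAS and DMAS combination rules for a single image point, given per-channel samples that have already been delayed to that point. The synthetic channel data are an assumption, and the minimum-variance weighting that turns DMAS into MVB-DMAS is deliberately omitted.

```python
# Sketch: DAS and DMAS outputs for one image point, given per-channel samples
# already aligned (delayed) for that point. The MV weighting of MVB-DMAS is
# not reproduced here.
import numpy as np
from itertools import combinations

def das(delayed):
    # Delay-and-sum: plain sum over channels
    return delayed.sum()

def dmas(delayed):
    # Delay-multiply-and-sum: signed square roots of pairwise products,
    # summed over all channel pairs i < j
    out = 0.0
    for i, j in combinations(range(delayed.size), 2):
        prod = delayed[i] * delayed[j]
        out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

rng = np.random.default_rng(8)
n_channels = 64
signal = 1.0                                        # coherent part at the focal point
delayed = signal + rng.normal(0, 0.5, n_channels)   # plus uncorrelated channel noise
print("DAS :", das(delayed))
print("DMAS:", dmas(delayed))
```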
Assessment of wear dependence parameters in complex model of cutting tool wear
NASA Astrophysics Data System (ADS)
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generic efficient life period of cutting tools, treated as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, accounting for randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for the assessment of wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.
Statistical classification techniques for engineering and climatic data samples
NASA Technical Reports Server (NTRS)
Temple, E. C.; Shipman, J. R.
1981-01-01
Fisher's sample linear discriminant function is modified through an appropriate alteration of the common sample variance-covariance matrix. The alteration consists of adding nonnegative values to the eigenvalues of the sample variance-covariance matrix. The desired result of this modification is to increase the number of correct classifications by the new linear discriminant function over Fisher's function. This study is limited to the two-group discriminant problem.
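The modification described can be sketched directly: eigen-decompose the pooled sample covariance, add a nonnegative constant to its eigenvalues, rebuild the matrix, and form the discriminant direction. The shift value and the synthetic two-group data below are illustrative choices, not those of the study.

```python
# Sketch: Fisher's two-group discriminant with the pooled covariance altered
# by adding a nonnegative value to its eigenvalues before inversion.
import numpy as np

def fisher_direction(x1, x2, eig_shift=0.0):
    mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
    n1, n2 = len(x1), len(x2)
    pooled = ((n1 - 1) * np.cov(x1, rowvar=False)
              + (n2 - 1) * np.cov(x2, rowvar=False)) / (n1 + n2 - 2)
    vals, vecs = np.linalg.eigh(pooled)
    vals = vals + eig_shift                     # nonnegative inflation of eigenvalues
    pooled_mod = vecs @ np.diag(vals) @ vecs.T
    return np.linalg.solve(pooled_mod, mu1 - mu2)

rng = np.random.default_rng(9)
x1 = rng.multivariate_normal([0, 0, 0], np.eye(3), 30)
x2 = rng.multivariate_normal([1, 1, 0], np.eye(3), 30)
w_classic = fisher_direction(x1, x2, eig_shift=0.0)
w_modified = fisher_direction(x1, x2, eig_shift=0.5)

# Classify a new point by the sign of w . (x - midpoint)
midpoint = (x1.mean(axis=0) + x2.mean(axis=0)) / 2
x_new = np.array([0.2, 0.1, -0.3])
print("classic score :", w_classic @ (x_new - midpoint))
print("modified score:", w_modified @ (x_new - midpoint))
```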
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performances of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
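A stripped-down version of the underlying post-processing step is sketched below: regression coefficients are learned, with a conjugate normal prior, from past pairs of raw forecasts and observations and then applied to correct a new raw forecast. The single-predictor setup, the known-noise-variance assumption, and the synthetic storm data are illustrative; the actual GBLR uses ten ensemble-derived predictors, station-based coefficients interpolated back to the grid, and leave-one-storm-out cross-validation.

```python
# Sketch of BLR-style post-processing: posterior over regression coefficients
# with a conjugate normal prior and known noise variance, then correction of
# a new raw forecast with the posterior-mean coefficients. Data are synthetic.
import numpy as np

rng = np.random.default_rng(10)

# Training pairs from past storms: raw model wind speed vs observed wind speed
raw = rng.uniform(5, 30, 92)
obs = 1.5 + 0.85 * raw + rng.normal(0, 2.0, 92)

X = np.column_stack([np.ones_like(raw), raw])     # intercept + raw forecast
sigma2 = 4.0                                      # assumed observation-noise variance
prior_mean = np.zeros(2)
prior_prec = np.eye(2) * 1e-2                     # weak prior precision

post_prec = prior_prec + X.T @ X / sigma2
post_mean = np.linalg.solve(post_prec, prior_prec @ prior_mean + X.T @ obs / sigma2)

new_raw = 22.0
corrected = post_mean @ np.array([1.0, new_raw])
print("posterior coefficients:", post_mean)
print(f"raw {new_raw:.1f} m/s -> corrected {corrected:.1f} m/s")
```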
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
NASA Astrophysics Data System (ADS)
Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.
2018-02-01
Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
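A minimal sketch of one pipeline of the kind evaluated is given below: Gaussian pre-filtering, a morphological active-contour segmentation, and a Dice-style overlap score against a manual mask. Synthetic speckled data stand in for OCT B-scans, and scikit-image's morphological Chan-Vese routine stands in for the paper's active-contour implementation.

```python
# Sketch: Gaussian filtering + morphological Chan-Vese active contour on a
# synthetic speckled image, scored with a Dice overlap against the "manual" mask.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import morphological_chan_vese

rng = np.random.default_rng(11)
img = np.zeros((128, 128))
rr, cc = np.ogrid[:128, :128]
truth = (rr - 64) ** 2 + (cc - 64) ** 2 < 35 ** 2    # "ovary" region (manual mask)
img[truth] = 1.0
img *= rng.gamma(4.0, 0.25, img.shape)               # multiplicative speckle-like noise
img += rng.normal(0, 0.1, img.shape)

smooth = gaussian_filter(img, sigma=3)               # pre-processing step
seg = morphological_chan_vese(smooth, 100).astype(bool)
if seg.mean() > 0.5:                                 # the contour may label background as 1
    seg = ~seg

dice = 2 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())
print(f"Dice overlap with manual mask: {dice:.3f}")
```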
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
K-Fold Crossvalidation in Canonical Analysis.
ERIC Educational Resources Information Center
Liang, Kun-Hsia; And Others
1995-01-01
A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)
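A small sketch of the idea is given below: a canonical correlation model is fit on K-1 folds and the first canonical correlation is re-estimated on the held-out fold, so that sample-specific variance does not inflate it. The synthetic data and the use of scikit-learn's CCA are assumptions; the original work predates these tools.

```python
# Sketch: K-fold cross-validation of the first canonical correlation. Fit CCA
# on the training folds, project the held-out fold, and correlate the held-out
# canonical variates. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import KFold

rng = np.random.default_rng(12)
n = 300
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(0, 1, n) for _ in range(5)])
Y = np.column_stack([latent + rng.normal(0, 1, n) for _ in range(4)])

held_out_r = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    cca = CCA(n_components=1).fit(X[train], Y[train])
    u, v = cca.transform(X[test], Y[test])
    held_out_r.append(np.corrcoef(u.ravel(), v.ravel())[0, 1])

print("cross-validated canonical correlations:", np.round(held_out_r, 3))
```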
NASA Technical Reports Server (NTRS)
Jennings, W. P.; Olsen, N. L.; Walter, M. J.
1976-01-01
The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented. Included is the consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve-fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, model frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios, due to turbulence creating data variance, are discussed. Data manipulation techniques used to overcome variance problems are also included. The experience gained by using these techniques since the early stages of the SST program is described. Data measured during 747 flight flutter tests, and during SST, YC-14, and 727 empennage flutter model tests, are included.
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study the microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
Are the Stress Drops of Small Earthquakes Good Predictors of the Stress Drops of Larger Earthquakes?
NASA Astrophysics Data System (ADS)
Hardebeck, J.
2017-12-01
Uncertainty in PSHA could be reduced through better estimates of stress drop for possible future large earthquakes. Studies of small earthquakes find spatial variability in stress drop; if large earthquakes have similar spatial patterns, their stress drops may be better predicted using the stress drops of small local events. This regionalization implies the variance with respect to the local mean stress drop may be smaller than the variance with respect to the global mean. I test this idea using the Shearer et al. (2006) stress drop catalog for M1.5-3.1 events in southern California. I apply quality control (Hauksson, 2015) and remove near-field aftershocks (Wooddell & Abrahamson, 2014). The standard deviation of the distribution of the log10 stress drop is reduced from 0.45 (factor of 3) to 0.31 (factor of 2) by normalizing each event's stress drop by the local mean. I explore whether a similar variance reduction is possible when using the Shearer catalog to predict stress drops of larger southern California events. For catalogs of moderate-sized events (e.g. Kanamori, 1993; Mayeda & Walter, 1996; Boyd, 2017), normalizing by the Shearer catalog's local mean stress drop does not reduce the standard deviation compared to the unmodified stress drops. I compile stress drops of larger events from the literature, and identify 15 M5.5-7.5 earthquakes with at least three estimates. Because of the wide range of stress drop estimates for each event, and the different techniques and assumptions, it is difficult to assign a single stress drop value to each event. Instead, I compare the distributions of stress drop estimates for pairs of events, and test whether the means of the distributions are statistically significantly different. The events divide into 3 categories: low, medium, and high stress drop, with significant differences in mean stress drop between events in the low and the high stress drop categories. I test whether the spatial patterns of the Shearer catalog stress drops can predict the categories of the 15 events. I find that they cannot, rather the large event stress drops are uncorrelated with the local mean stress drop from the Shearer catalog. These results imply that the regionalization of stress drops of small events does not extend to the larger events, at least with current standard techniques of stress drop estimation.
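The variance comparison described above amounts to measuring the spread of log10 stress drops about a global mean versus the spread about local (regional) means. The sketch below reproduces that comparison on synthetic stress drops with a built-in regional signal; the grouping by region index and the numbers used are assumptions, not the Shearer et al. catalog.

```python
# Sketch: standard deviation of log10 stress drop about the global mean versus
# about local (regional) means, on synthetic data with a regional signal.
import numpy as np
import pandas as pd

rng = np.random.default_rng(13)
n = 5000
region = rng.integers(0, 50, n)
regional_shift = rng.normal(0, 0.3, 50)[region]          # persistent regional signal
log_sd = 0.7 + regional_shift + rng.normal(0, 0.31, n)   # log10 stress drop (MPa)

df = pd.DataFrame({"region": region, "log_sd": log_sd})
global_std = df["log_sd"].std()
local_mean = df.groupby("region")["log_sd"].transform("mean")
local_std = (df["log_sd"] - local_mean).std()
print(f"std about global mean: {global_std:.2f}")
print(f"std about local means: {local_std:.2f}")
```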
Ronald E. McRoberts; Erkki O. Tomppo; Andrew O. Finley; Heikkinen Juha
2007-01-01
The k-Nearest Neighbor (k-NN) technique has become extremely popular for a variety of forest inventory mapping and estimation applications. Much of this popularity may be attributed to the non-parametric, multivariate features of the technique, its intuitiveness, and its ease of use. When used with satellite imagery and forest...
Using Analytical Techniques to Interpret Financial Statements.
ERIC Educational Resources Information Center
Walters, Donald L.
1986-01-01
Summarizes techniques for interpreting the balance sheet and the statement of revenues, expenditures, and changes-in-fund-balance sections of the comprehensive annual financial report required of all school districts. Uses three tables to show intricacies involved and focuses on analyzing favorable and unfavorable budget variances. (MLH)
Fractal structures and fractal functions as disease indicators
Escos, J.M; Alados, C.L.; Emlen, J.M.
1995-01-01
Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability on two species, green peppers (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance in the allometric relationship between control plants and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in complexity of branch structure in green pepper with a viral infection. The box-counting fractal dimension of branch architecture declined under infection stress. We also tested the reduction in complexity of behavioral patterns under stress situations in Spanish ibex (Capra pyrenaica). The fractal dimension of the head-lift frequency distribution measures predator detection efficiency. This dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proves to be the most powerful tool for characterizing fractal behavior, revealing a reduction in complexity of time distribution activity under parasitic infection.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
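The following Python sketch illustrates the classical search-curve FAST estimate of first-order (main-effect) sensitivity indices on the standard Ishigami test function. The frequency set, harmonic order, and sample size are illustrative choices only; a production FAST implementation selects interference-free frequencies much more carefully, and the interaction-effect extension discussed in the abstract is not shown here.

```python
import numpy as np

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

# Integer frequencies for the three inputs (illustrative only) and M harmonics.
w = np.array([11, 21, 29])
M = 4
N = 2 * M * w.max() + 1                              # rule-of-thumb sample size
s = np.pi * (2 * np.arange(1, N + 1) - N - 1) / N    # search-curve samples

# Search-curve transformation onto [0, 1], then onto [-pi, pi].
u = 0.5 + np.arcsin(np.sin(np.outer(w, s))) / np.pi  # shape (3, N)
x = -np.pi + 2 * np.pi * u
y = ishigami(x[0], x[1], x[2])

def fourier_pair(m):
    # Fourier coefficients of the output along the search curve at frequency m.
    return np.mean(y * np.cos(m * s)), np.mean(y * np.sin(m * s))

total_var = y.var()
for i, wi in enumerate(w):
    D_i = 2.0 * sum(a2 ** 2 + b2 ** 2
                    for a2, b2 in (fourier_pair(p * wi) for p in range(1, M + 1)))
    print(f"first-order sensitivity S_{i + 1} ~ {D_i / total_var:.2f}")
```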
Analysis and comparison of the biomechanical properties of univalved and bivalved cast models.
Crickard, Colin V; Riccio, Anthony I; Carney, Joseph R; Anderson, Terrence D
2011-01-01
Fiberglass casts are frequently valved to relieve the pressure associated with upper extremity swelling after a surgical procedure or when applied after reduction of a displaced fracture in a child. Although different opinions exist regarding the valving of casts, no research to date has explored the biomechanical effects of this commonly used technique. As cast integrity is essential for the maintenance of fracture reduction, it is important to understand whether casts are structurally compromised after valving. Understanding the effects of valving on cast integrity may help guide clinicians in the technique of valving while minimizing the potential for a loss of fracture reduction. Thirty standardized cylindrical fiberglass cast models were created. Ten models were left intact, 10 were univalved, and 10 were bivalved. All the models were mechanically tested by a 3-point bending apparatus secured to a biaxial materials testing system. Load to failure and bending stiffness were recorded for each sample. Differences in load of failure and bending stiffness were compared among the groups. Unvalved cast models had the highest failure load and bending stiffness, whereas bivalved casts showed the lowest value for both failure load and bending stiffness. Univalved casts had a failure load measured to be between those of unvalved and bivalved cast models. Analysis of variance showed significance when failure load and bending stiffness data among all the groups were compared. A post hoc Bonferroni statistical analysis showed significance in bending stiffness between intact and bivalved models (P < 0.01), intact and univalved models (P < 0.01), but no significant difference in bending stiffness between univalved and bivalved models (P > 0.01). Differences in measured failure load values were found to be statistically significant among all cast models (P < 0.01). Valving significantly decreases the bending stiffness and load to failure of fiberglass casts. Univalved casts have a higher load to failure than bivalved casts. Valving adversely alters the structural integrity of fiberglass casts. This may impair a cast's ability to effectively immobilize an extremity or maintain a fracture reduction.
A Technique for Developing Probabilistic Properties of Earth Materials
1988-04-01
Department of Civil Engineering. Responsibility for coordinating this program was assigned to Mr. A. E. Jackson, Jr., GD, under the supervision of Dr. ... Notation (partially recovered from the damaged extract): deformation assumed as a right circular cylinder; E = expected value; F = ratio of the between-sample variance to the within-sample variance; F = area ...; true radial strain; axial strain; number of increments in the covariance analysis; VL = loading Poisson's ratio; VUN = unloading Poisson's ratio.
Transforming RNA-Seq data to improve the performance of prognostic gene signatures.
Zwiener, Isabella; Frisch, Barbara; Binder, Harald
2014-01-01
Gene expression measurements have successfully been used for building prognostic signatures, i.e. for identifying a short list of important genes that can predict patient outcome. Mostly microarray measurements have been considered, and there is little advice available for building multivariable risk prediction models from RNA-Seq data. We specifically consider penalized regression techniques, such as the lasso and componentwise boosting, which can simultaneously consider all measurements and provide both multivariable regression models for prediction and automated variable selection. However, they might be affected by the typical skewness, mean-variance dependency, or extreme values of RNA-Seq covariates and therefore could benefit from transformations of the latter. In an analytical part, we highlight preferential selection of covariates with large variances, which is problematic due to the mean-variance dependency of RNA-Seq data. In a simulation study, we compare different transformations of RNA-Seq data for potentially improving detection of important genes. Specifically, we consider standardization, the log transformation, a variance-stabilizing transformation, the Box-Cox transformation, and rank-based transformations. In addition, the prediction performance for real data from patients with kidney cancer and acute myeloid leukemia is considered. We show that signature size, identification performance, and prediction performance critically depend on the choice of a suitable transformation. Rank-based transformations perform well in all scenarios and can even outperform complex variance-stabilizing approaches. Generally, the results illustrate that the distribution and potential transformations of RNA-Seq data need to be considered as a critical step when building risk prediction models by penalized regression techniques.
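As a small illustration of the mean-variance dependency discussed above and of how transformations affect it, the sketch below simulates negative-binomial counts and compares the gene-wise mean-variance relationship for raw, log, and rank-based inverse normal transforms. The simulation parameters are arbitrary assumptions, the fitted slope is only a crude stand-in for the fuller analyses in the paper, and a SciPy version with rankdata axis support is assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic RNA-Seq-like counts: negative binomial with gene-specific means,
# so the gene variance grows with the gene mean.
n_samples, n_genes = 60, 500
gene_mean = rng.lognormal(mean=3.0, sigma=1.5, size=n_genes)
dispersion = 0.3
r = 1.0 / dispersion
p = r / (r + gene_mean)                       # numpy's NB parameterization
counts = rng.negative_binomial(r, p, size=(n_samples, n_genes)).astype(float)

def rank_inverse_normal(x):
    # Column-wise rank-based inverse normal transform.
    ranks = stats.rankdata(x, axis=0)
    return stats.norm.ppf((ranks - 0.5) / x.shape[0])

transforms = [("raw", lambda x: x),
              ("log2(x+1)", lambda x: np.log2(x + 1.0)),
              ("rank-based", rank_inverse_normal)]

for name, tf in transforms:
    z = tf(counts)
    gene_means = z.mean(axis=0)
    gene_vars = z.var(axis=0, ddof=1)
    slope = np.polyfit(gene_means, gene_vars, 1)[0]   # crude mean-variance dependency measure
    print(f"{name:>11}: slope of gene variance vs gene mean = {slope:+.3f}")
```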
Umegaki, Hiroyuki; Yanagawa, Madoka; Nonogaki, Zen; Nakashima, Hirotaka; Kuzuya, Masafumi; Endo, Hidetoshi
2014-01-01
We surveyed the care burden of family caregivers, their satisfaction with the services, and whether their care burden was reduced by the introduction of the long-term care insurance (LTCI) care services. We randomly enrolled 3000 of 43,250 residents of Nagoya City aged 65 and over who had been certified as requiring long-term care and who used at least one type of service provided by the public LTCI; 1835 (61.2%) subjects returned the survey. A total of 1015 subjects for whom complete sets of data were available were employed for statistical analysis. Analysis of variance was performed for the continuous variables and χ(2) analysis for the categorical variables. Multiple logistic analysis was performed with the factors with p values of <0.2 in the χ(2) analysis of burden reduction. A total of 68.8% of the caregivers indicated that the care burden was reduced by the introduction of the LTCI care services, and 86.8% of the caregivers were satisfied with the LTCI care services. A lower age of caregivers, a more advanced need classification level, and more satisfaction with the services were independently associated with a reduction of the care burden. In Japanese LTCI, the overall satisfaction of the caregivers appears to be relatively high and is associated with the reduction of the care burden. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Prospects for discovering pulsars in future continuum surveys using variance imaging
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Hobbs, G.
2017-12-01
In our previous paper, we developed a formalism for computing variance images from standard, interferometric radio images containing time and frequency information. Variance imaging with future radio continuum surveys allows us to identify radio pulsars and serves as a complement to conventional pulsar searches that are most sensitive to strictly periodic signals. Here, we carry out simulations to predict the number of pulsars that we can uncover with variance imaging in future continuum surveys. We show that the Australian SKA Pathfinder (ASKAP) Evolutionary Map of the Universe (EMU) survey can find ∼30 normal pulsars and ∼40 millisecond pulsars (MSPs) over and above the number known today, and similarly an all-sky continuum survey with SKA-MID can discover ∼140 normal pulsars and ∼110 MSPs with this technique. Variance imaging with EMU and SKA-MID will detect pulsars with large duty cycles and is therefore a potential tool for finding MSPs and pulsars in relativistic binary systems. Compared with current pulsar surveys at high Galactic latitudes in the Southern hemisphere, variance imaging with EMU and SKA-MID will be more sensitive, and will enable detection of pulsars with dispersion measures between ∼10 and 100 cm-3 pc.
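A minimal sketch of the idea behind variance imaging follows: given a stack of snapshot images over time and frequency, a steady continuum source dominates the mean image, while a strongly modulated (pulsar-like) source stands out in the per-pixel variance image. The synthetic data cube and source amplitudes are invented for illustration and do not reproduce the formalism of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stack of snapshot images: (n_time, n_freq, ny, nx) of pure noise.
n_t, n_f, ny, nx = 32, 16, 64, 64
cube = rng.normal(0.0, 1.0, size=(n_t, n_f, ny, nx))

# A steady continuum source: raises the mean image but not the variance image.
cube[:, :, 20, 20] += 5.0

# A strongly modulated (pulsar-like) source: weak on average, but its pulse
# modulation inflates the variance across time and frequency.
modulation = rng.exponential(1.0, size=(n_t, n_f))
cube[:, :, 40, 45] += 1.0 * modulation

mean_image = cube.mean(axis=(0, 1))
var_image = cube.var(axis=(0, 1), ddof=1)

for label, img in [("mean", mean_image), ("variance", var_image)]:
    j, i = np.unravel_index(np.argmax(img), img.shape)
    print(f"brightest pixel in {label} image: ({j}, {i})")
```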
Wing download reduction using vortex trapping plates
NASA Technical Reports Server (NTRS)
Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.
1994-01-01
A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the component extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, in order to have both high spatial resolution of beamformer and be able to take on multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli were also used to compare performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Sakamoto, Sadanori; Iguchi, Masaki
2018-06-08
Less attention to a balance task reduces the center of foot pressure (COP) variability by automating the task. However, it is not fully understood how the degree of postural automaticity influences the voluntary movement and anticipatory postural adjustments. Eleven healthy young adults performed a bipedal, eyes-closed standing task under three conditions: Control (C, standing task), Single (S, standing + reaction tasks), and Dual (D, standing + reaction + mental tasks). The reaction task was flexing the right shoulder to an auditory stimulus, which causes counter-clockwise rotational torque, and the mental task was an arithmetic task. The COP variance before the reaction task was reduced in the D condition compared to that in the C and S conditions. On average, the onsets of the arm movement and the vertical torque (Tz, anticipatory clockwise rotational torque) were both delayed, and the maximal Tz slope (the rate at which the torque develops) became less steep in the D condition compared to those in the S condition. When these data in the D condition were expressed as a percentage of those in the S condition, the arm movement onset and the Tz slope were positively and negatively, respectively, correlated with the COP variance. By using the mental-task-induced COP variance reduction as the indicator of postural automaticity, our data suggest that the balance task for those with more COP variance reduction is less cognitively demanding, leading to shorter reaction times, probably due to an attention shift from the automated balance task to the reaction task. Copyright © 2018 Elsevier B.V. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.
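For the random-intercept special case, a back-of-the-envelope version of such a coefficient can be computed as the ratio of the random-effect variance to the total residual variance. The sketch below simulates random-intercept data and estimates the variance components by a one-way ANOVA method of moments; the specific formula used here is an assumption of this illustration, not a quote of the paper's expressions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-intercept data: y_ij = beta0 + beta1 * x_ij + b_i + e_ij
n_groups, n_per = 50, 20
sigma_b, sigma_e = 1.5, 1.0
b = rng.normal(0.0, sigma_b, n_groups)
x = rng.normal(size=(n_groups, n_per))
y = 2.0 + 0.5 * x + b[:, None] + rng.normal(0.0, sigma_e, (n_groups, n_per))

# Remove the fixed-effect part with a pooled OLS fit, then estimate the
# variance components of the residuals by one-way ANOVA (method of moments).
X = np.column_stack([np.ones(y.size), x.ravel()])
beta_hat, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
resid = (y.ravel() - X @ beta_hat).reshape(n_groups, n_per)

ms_within = np.sum((resid - resid.mean(axis=1, keepdims=True)) ** 2) / (n_groups * (n_per - 1))
ms_between = n_per * np.sum((resid.mean(axis=1) - resid.mean()) ** 2) / (n_groups - 1)
var_e = ms_within
var_b = max((ms_between - ms_within) / n_per, 0.0)

r2_random = var_b / (var_b + var_e)
print(f"proportion of conditional variance attributed to random effects: {r2_random:.2f}")
# True value in this simulation: 1.5^2 / (1.5^2 + 1.0^2) ~ 0.69
```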
An unsupervised classification technique for multispectral remote sensing data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Cummings, R. E.
1973-01-01
Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.
Amar, Eyal; Maman, Eran; Khashan, Morsi; Kauffman, Ehud; Rath, Ehud; Chechik, Ofir
2012-11-01
The shoulder is regarded as the most commonly dislocated major joint in the human body. Most dislocations can be reduced by simple methods in the emergency department, whereas others require more complicated approaches. We compared the efficacy, safety, pain, and duration of the reduction between the Milch technique and the Stimson technique in treating dislocations. We also identified factors that affected success rate. All enrolled patients were randomized to either the Milch technique or the Stimson technique for dislocated shoulder reduction. The study cohort consisted of 60 patients (mean age, 43.9 years; age range, 18-88 years) who were randomly assigned to treatment by either the Stimson technique (n = 25) or the Milch technique (n = 35). Oral analgesics were available for both groups. The 2 groups were similar in demographics, patient characteristics, and pain levels. The first reduction attempt in the Milch and Stimson groups was successful in 82.8% and 28% of cases, respectively (P < .001), and the mean reduction time was 4.68 and 8.84 minutes, respectively (P = .007). The success rate was found to be affected by the reduction technique, the interval between dislocation occurrence and first reduction attempt, and the pain level on admittance. The success rate and time to achieve reduction without sedation were superior for the Milch technique compared with the Stimson technique. Early implementation of reduction measures and low pain levels at presentation favor successful reduction, which--in combination with oral pain medication--constitutes an acceptable and reasonable management alternative to reduction with sedation. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Portnoy, Orith; Guranda, Larisa; Apter, Sara; Eiss, David; Amitai, Marianne Michal; Konen, Eli
2011-11-01
The purpose of this study was to compare opacification of the urinary collecting system and radiation dose associated with three-phase 64-MDCT urographic protocols and those associated with a split-bolus dual-phase protocol including furosemide. Images from 150 CT urographic examinations performed with three scanning protocols were retrospectively evaluated. Group A consisted of 50 sequentially registered patients who underwent a three-phase protocol with saline infusion. Group B consisted of 50 sequentially registered patients who underwent a reduced-radiation three-phase protocol with saline. Group C consisted of 50 sequentially registered patients who underwent a dual-phase split-bolus protocol that included a low-dose furosemide injection. Opacification of the urinary collecting system was evaluated with segmental binary scoring. Contrast artifacts were evaluated, and radiation doses were recorded. Results were compared by analysis of variance. A significant reduction in mean effective radiation dose was found between groups A and B (p < 0.001) and between groups B and C (p < 0.001), resulting in 65% reduction between groups A and C (p < 0.001). This reduction did not significantly affect opacification score in any of the 12 urinary segments (p = 0.079). In addition, dense contrast artifacts overlying the renal parenchyma observed with the three-phase protocols (groups A and B) were avoided with the dual-phase protocol (group C) (p < 0.001). A dual-phase protocol with furosemide injection is the preferable technique for CT urography. In comparison with commonly used three-phase protocols, the dual-phase protocol significantly reduces radiation exposure dose without reduction in image quality.
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
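The two diagnostics mentioned above can be sketched in a few lines: a normal-theory confidence interval for the tally mean and a relative variance-of-the-variance (VOV) estimate computed from the centered fourth and second moments. The skewed toy scores and the specific VOV formula are assumptions of this illustration, not a quote of any particular transport code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Monte Carlo tally: heavily skewed history scores, the situation in
# which checking more than the 1/sqrt(N) error decay matters.
N = 100_000
x = rng.lognormal(mean=0.0, sigma=2.0, size=N)

xbar = x.mean()
s2 = x.var(ddof=1)
rel_err = np.sqrt(s2 / N) / xbar              # relative error of the mean

# Relative variance of the variance (VOV) from centered moments (assumed form).
d = x - xbar
vov = np.sum(d ** 4) / np.sum(d ** 2) ** 2 - 1.0 / N

# Normal-theory 95% confidence interval for the mean (valid only if the
# CLT requirements discussed above are actually met).
half_width = 1.96 * np.sqrt(s2 / N)
print(f"mean = {xbar:.3f} +/- {half_width:.3f}, rel. err. = {rel_err:.3f}, VOV = {vov:.4f}")
```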
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width-at-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB of sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
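The sketch below illustrates the negative-correlation idea on a deliberately simplified pure-birth tau-leaping model: each Poisson increment is drawn by inverting a uniform, and an antithetic partner path reuses the reflected uniforms 1 - u. The model, parameters, and pairing scheme are stand-ins chosen for brevity, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

def tau_leap_path(x0, rate, tau, n_steps, uniforms):
    """Pure-birth tau-leaping driven by supplied uniforms via the Poisson inverse CDF."""
    x = float(x0)
    for k in range(n_steps):
        mu = rate * x * tau                  # state-proportional intensity
        x += poisson.ppf(uniforms[k], mu)    # Poisson increment from a uniform draw
    return x

x0, rate, tau, n_steps = 10, 0.05, 0.1, 100
n_pairs = 500

plain, antithetic = [], []
for _ in range(n_pairs):
    u1 = rng.random(n_steps)
    p1 = tau_leap_path(x0, rate, tau, n_steps, u1)
    p2 = tau_leap_path(x0, rate, tau, n_steps, rng.random(n_steps))
    # Antithetic partner: reuse the reflected uniforms, so increments along the
    # two paths are negatively correlated.
    p_anti = tau_leap_path(x0, rate, tau, n_steps, 1.0 - u1)
    plain.append(0.5 * (p1 + p2))
    antithetic.append(0.5 * (p1 + p_anti))

plain, antithetic = np.array(plain), np.array(antithetic)
print(f"variance of paired estimator, independent draws: {plain.var(ddof=1):.2f}")
print(f"variance of paired estimator, antithetic draws:  {antithetic.var(ddof=1):.2f}")
```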
Representativeness of laboratory sampling procedures for the analysis of trace metals in soil.
Dubé, Jean-Sébastien; Boudreault, Jean-Philippe; Bost, Régis; Sona, Mirela; Duhaime, François; Éthier, Yannic
2015-08-01
This study was conducted to assess the representativeness of laboratory sampling protocols for purposes of trace metal analysis in soil. Five laboratory protocols were compared, including conventional grab sampling, to assess the influence of sectorial splitting, sieving, and grinding on measured trace metal concentrations and their variability. It was concluded that grinding was the most important factor in controlling the variability of trace metal concentrations. Grinding increased the reproducibility of sample mass reduction by rotary sectorial splitting by up to two orders of magnitude. Combined with rotary sectorial splitting, grinding increased the reproducibility of trace metal concentrations by almost three orders of magnitude compared to grab sampling. Moreover, results showed that if grinding is used as part of a mass reduction protocol by sectorial splitting, the effect of sieving on reproducibility became insignificant. Gy's sampling theory and practice was also used to analyze the aforementioned sampling protocols. While the theoretical relative variances calculated for each sampling protocol qualitatively agreed with the experimental variances, their quantitative agreement was very poor. It was assumed that the parameters used in the calculation of theoretical sampling variances may not correctly estimate the constitutional heterogeneity of soils or soil-like materials. Finally, the results have highlighted the pitfalls of grab sampling, namely, the fact that it does not exert control over incorrect sampling errors and that it is strongly affected by distribution heterogeneity.
Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach
NASA Astrophysics Data System (ADS)
Kumral, Mustafa; Ozer, Umit
2013-03-01
Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has different estimation/simulation variance reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the medium of multiple simulations, the dispersion variances of blocks can be thought to regard technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign to generate more homogenous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under budget constraint. Uncertainty measure of the optimization process in this paper is interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance i.e., minimizing interpolation variance of the block generating maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates to use two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configuration with minimization of interpolation variance and drill hole simulations with maximization of interpolation variance. Two-space interacts to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of approach. The findings showed that the approach could be used to plan a new drilling campaign.
SMALL COLOUR VISION VARIATIONS AND THEIR EFFECT IN VISUAL COLORIMETRY,
COLOR VISION, PERFORMANCE(HUMAN), TEST EQUIPMENT, CORRELATION TECHNIQUES, STATISTICAL PROCESSES, COLORS, ANALYSIS OF VARIANCE, AGING(MATERIALS), COLORIMETRY, BRIGHTNESS, ANOMALIES, PLASTICS, UNITED KINGDOM.
Methods for Improving Information from ’Undesigned’ Human Factors Experiments.
Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices(Mathematics), Multiple disciplines, Mathematical prediction
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
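A minimal sketch of post-modeling histogram matching follows, assuming a hypothetical set of reference (e.g. plot-level) values and variance-truncated predictions: each prediction is replaced by the reference quantile at its own empirical rank, restoring the spread while preserving rank order.

```python
import numpy as np

def histogram_match(predicted, reference):
    """Map predicted values onto the empirical distribution of the reference
    values by quantile matching (rank-preserving)."""
    order = np.argsort(predicted)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(predicted.size)
    # Empirical quantile of each prediction, then the corresponding
    # quantile of the reference distribution.
    q = (ranks + 0.5) / predicted.size
    return np.quantile(reference, q)

rng = np.random.default_rng(6)
truth = rng.gamma(shape=2.0, scale=10.0, size=2000)          # hypothetical plot-level attribute
predicted = 0.6 * (truth - truth.mean()) + truth.mean() \
            + rng.normal(0.0, 3.0, truth.size)               # variance-truncated prediction

matched = histogram_match(predicted, truth)
print(f"variance: truth {truth.var():.0f}, predicted {predicted.var():.0f}, matched {matched.var():.0f}")
```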
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?
Code of Federal Regulations, 2010 CFR
2010-07-01
... subpart for a national primary drinking water regulation for a microbial contaminant (including a bacterium, virus, or other organism) or an indicator or treatment technique for a microbial contaminant. (b... requirement specifying a maximum contaminant level or treatment technique for a contaminant with respect to...
Variance Reduction in Simulation Experiments: A Mathematical-Statistical Framework.
1983-12-01
Handscomb (1964), Granovsky (1981), Rubinstein (1981), and Wilson (1983b). The use of conditional expectations (CE) will be described as the term is ... Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte ...
Genetic and environmental influences on blood pressure variability: a study in twins.
Xu, Xiaojing; Ding, Xiuhua; Zhang, Xinyan; Su, Shaoyong; Treiber, Frank A; Vlietinck, Robert; Fagard, Robert; Derom, Catherine; Gielen, Marij; Loos, Ruth J F; Snieder, Harold; Wang, Xiaoling
2013-04-01
Blood pressure variability (BPV) and its reduction in response to antihypertensive treatment are predictors of clinical outcomes; however, little is known about its heritability. In this study, we examined the relative influence of genetic and environmental sources of variance of BPV and the extent to which it may depend on race or sex in young twins. Twins were enrolled from two studies. One study included 703 white twins (308 pairs and 87 singletons) aged 18-34 years, whereas another study included 242 white twins (108 pairs and 26 singletons) and 188 black twins (79 pairs and 30 singletons) aged 12-30 years. BPV was calculated from 24-h ambulatory blood pressure recording. Twin modeling showed similar results in the separate analysis in both twin studies and in the meta-analysis. Familial aggregation was identified for SBP variability (SBPV) and DBP variability (DBPV) with genetic factors and common environmental factors together accounting for 18-40% and 23-31% of the total variance of SBPV and DBPV, respectively. Unique environmental factors were the largest contributor explaining up to 82-77% of the total variance of SBPV and DBPV. No sex or race difference in BPV variance components was observed. The results remained the same after adjustment for 24-h blood pressure levels. The variance in BPV is predominantly determined by unique environment in youth and young adults, although familial aggregation due to additive genetic and/or common environment influences was also identified explaining about 25% of the variance in BPV.
Efficacy of Chinese auriculotherapy for stress in nursing staff: a randomized clinical trial
Kurebayashi, Leonice Fumiko Sato; da Silva, Maria Júlia Paes
2014-01-01
Objective: This randomized single blind clinical study aimed to evaluate the efficacy of auriculotherapy with and without a protocol for reducing stress levels among nursing staff. Method: A total of 175 nursing professionals with medium and high scores according to Vasconcelos' Stress Symptoms List were divided into 3 groups: Control (58), Group with protocol (58), Group with no protocol (59). They were assessed at the baseline, after 12 sessions, and at the follow-up (30 days). Results: In the analysis of variance, statistically significant differences between the Control and Intervention groups were found in the two evaluations (p<0.05) with greater size of effect indices (Cohen) for the No protocol group. The Yang Liver 1 and 2, Kidney, Brain Stem and Shen Men were the points most used. Conclusion: Individualized auriculotherapy, with no protocol, could expand the scope of the technique for stress reduction compared with auriculotherapy with a protocol. NCT: 01420835 PMID:25029046
Demixed principal component analysis of neural population data
Kobak, Dmitry; Brendel, Wieland; Constantinidis, Christos; Feierstein, Claudia E; Kepecs, Adam; Mainen, Zachary F; Qi, Xue-Lian; Romo, Ranulfo; Uchida, Naoshige; Machens, Christian K
2016-01-01
Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure. DOI: http://dx.doi.org/10.7554/eLife.10989.001 PMID:27067378
Simulation of neutron production using MCNPX+MCUNED.
Erhard, M; Sauvan, P; Nolte, R
2014-10-01
In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 makes it possible to model the production of neutrons by light ions down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB. In this experiment, 14.7-MeV neutrons will be produced by the reaction T(d,n)(4)He. The neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
West, Michael; Gao, Wei; Grand, Stephen
2004-08-01
Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion, which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection by using standard telecoms components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which degrades the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states where a local laser is employed and is not locked to the received signal, but a postdetection phase correction is applied. Here the reduction of the secure key rate caused by the laser phase noise, for both individual and collective attacks, is analytically evaluated, and a pilot-assisted phase-estimation scheme is proposed, outlining the tradeoff in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the phase-noise amount is derived.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of the full Navier-Stokes equations in cylindrical coordinates and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods, such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming, are used for the quadratic optimization.
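The generic form of the quadratic programs described above, minimizing a convex quadratic objective subject to affine inequality constraints, can be sketched as follows; the matrices are arbitrary illustrative numbers (not values from the jet-noise model), and SciPy's SLSQP solver stands in for whatever solver the study used.

```python
import numpy as np
from scipy.optimize import minimize

# Generic convex QP:
#   minimize 0.5 x^T P x + q^T x   subject to   A x <= b
# P, q, A, b are illustrative numbers only.
P = np.array([[4.0, 1.0], [1.0, 2.0]])       # symmetric positive definite
q = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

objective = lambda x: 0.5 * x @ P @ x + q @ x
constraints = {"type": "ineq", "fun": lambda x: b - A @ x}   # feasible when b - Ax >= 0

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print("optimal x:", np.round(res.x, 4), " objective:", round(float(res.fun), 4))
```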
A data base and analysis program for shuttle main engine dynamic pressure measurements
NASA Technical Reports Server (NTRS)
Coffin, T.
1986-01-01
A dynamic pressure data base management system is described for measurements obtained from space shuttle main engine (SSME) hot firing tests. The data were provided in terms of engine power level and rms pressure time histories, and power spectra of the dynamic pressure measurements at selected times during each test. Test measurements and engine locations are defined along with a discussion of data acquisition and reduction procedures. A description of the data base management analysis system is provided and subroutines developed for obtaining selected measurement means, variances, ranges and other statistics of interest are discussed. A summary of pressure spectra obtained at SSME rated power level is provided for reference. Application of the singular value decomposition technique to spectrum interpolation is discussed and isoplots of interpolated spectra are presented to indicate measurement trends with engine power level. Program listings of the data base management and spectrum interpolation software are given. Appendices are included to document all data base measurements.
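The following sketch illustrates SVD-based spectrum interpolation in the spirit described above: spectra measured at several power levels are factored with a truncated singular value decomposition, the low-rank coefficients are interpolated to a new power level, and a spectrum is reconstructed there. The synthetic spectra, rank, and power levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "spectra" at a few engine power levels: each row is a spectrum
# built from two smooth shapes whose amplitudes vary with power level.
freqs = np.linspace(0.0, 1.0, 200)
powers = np.array([65.0, 80.0, 90.0, 100.0, 104.0, 109.0])
shape1 = np.exp(-((freqs - 0.3) / 0.05) ** 2)
shape2 = np.exp(-((freqs - 0.7) / 0.08) ** 2)
spectra = (np.outer(0.02 * powers, shape1)
           + np.outer(0.5 + 0.001 * powers ** 1.5, shape2)
           + 0.02 * rng.normal(size=(powers.size, freqs.size)))

# Truncated SVD: spectra ~ U S V^T; interpolate the low-rank coefficients
# (rows of U*S) over power level, then reconstruct a spectrum at a new level.
U, S, Vt = np.linalg.svd(spectra, full_matrices=False)
rank = 2
coeffs = U[:, :rank] * S[:rank]              # one coefficient vector per power level

target_power = 95.0
interp_coeffs = np.array([np.interp(target_power, powers, coeffs[:, k]) for k in range(rank)])
interpolated_spectrum = interp_coeffs @ Vt[:rank]
print("interpolated spectrum peak value:", round(float(interpolated_spectrum.max()), 3))
```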
Neutron die-away experiment for remote analysis of the surface of the moon and the planets, phase 3
NASA Technical Reports Server (NTRS)
Mills, W. R.; Allen, L. S.
1972-01-01
Continuing work on the two die-away measurements proposed to be made in the combined pulsed neutron experiment (CPNE) for analysis of lunar and planetary surfaces is described. This report documents research done during Phase 3. A general exposition of data analysis by the least-squares method and the related problem of the prediction of variance is given. A data analysis procedure for epithermal die-away data has been formulated. In order to facilitate the analysis, the number of independent material variables has been reduced to two: the hydrogen density and an effective oxygen density, the latter being determined uniquely from the nonhydrogeneous elemental composition. Justification for this reduction in the number of variables is based on a set of 27 new theoretical calculations. Work is described related to experimental calibration of the epithermal die-away measurement. An interim data analysis technique based solely on theoretical calculations seems to be adequate and will be used for future CPNE field tests.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
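A compact numeric sketch of the variance-decomposition idea follows, using ordinary least squares on synthetic data with two cross-scale-correlated habitat variables: the explained variation is split into components unique to each scale and a shared component attributable to the cross-scale correlation. The linear model and simulated variables are stand-ins for the habitat models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Two habitat variables measured at different scales, deliberately correlated
# across scales, plus a response driven by both.
n = 500
local = rng.normal(size=n)                            # e.g. nest-tree scale
landscape = 0.7 * local + 0.7 * rng.normal(size=n)    # e.g. landscape scale, cross-scale correlated
y = 1.0 * local + 1.0 * landscape + rng.normal(size=n)

r2_local = r_squared(local[:, None], y)
r2_land = r_squared(landscape[:, None], y)
r2_both = r_squared(np.column_stack([local, landscape]), y)

unique_local = r2_both - r2_land
unique_land = r2_both - r2_local
shared = r2_both - unique_local - unique_land
print(f"unique local: {unique_local:.2f}, unique landscape: {unique_land:.2f}, shared (cross-scale): {shared:.2f}")
```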
Vardi, Yoram; Sprecher, Elliot; Gruenwald, Ilan; Yarnitsky, David; Gartman, Irena; Granovsky, Yelena
2009-06-01
There is a need for an objective technique to assess the degree of hypoactive sexual desire disorder (HSDD). Recently, we described such a methodology (event-related potential technique [ERP]) based on recording of p300 electroencephalography (EEG) waves elicited by auditory stimuli during synchronous exposure to erotic films. To compare sexual interest of sexually healthy women to females with sexual dysfunction (FSD) using ERP, and to explore whether FSD women with and without HSDD would respond differently to two different types of erotic stimuli-films containing (I) or not containing (NI) sexual intercourse scenes. Twenty-two women with FSD, of which nine had HSDD only, and 30 sexually healthy women were assessed by the Female Sexual Functioning Index. ERP methodology was performed applying erotic NI or I films. Significant differences in percent of auditory p300 amplitude reduction (PR) in response to erotic stimuli within and between all three groups for each film type. PRs to each film type were similar in sexually healthy women (60.6% +/- 40.3 (NI) and 51.7% +/- 32.3 [I]), while in women with FSD, reduction was greater when viewing the NI vs. I erotic films (71.4% +/- 41.0 vs. 37.7% +/- 45.7; P = 0.0099). This difference was mainly due to the greater PR of the subgroup with HSDD in response to NI vs. I films (77.7% +/- 46.7 vs. 17.0% +/- 50.3) than in the FSD women without HSDD group or the sexually healthy women (67.5% +/- 38.7 vs. 50.4% +/- 39.4 respectively), P = 0.0084. For comparisons, we used the mixed-model one-way analysis of variance. Differences in neurophysiological response patterns between sexually healthy vs. sexually dysfunctional females may point to a specific inverse discrimination ability for sexually relevant information in the subgroup of women with HSDD. These findings suggest that the p300 ERP technique could be used as an objective quantitative tool for libido assessment in sexually dysfunctional women.
Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo Chirici
2011-01-01
Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
VO2 and VCO2 variabilities through indirect calorimetry instrumentation.
Cadena-Méndez, Miguel; Escalante-Ramírez, Boris; Azpiroz-Leehan, Joaquín; Infante-Vázquez, Oscar
2013-01-01
The aim of this paper is to understand how to measure the VO2 and VCO2 variabilities in indirect calorimetry (IC) since we believe they can explain the high variation in the resting energy expenditure (REE) estimation. We propose that variabilities should be separately measured from the VO2 and VCO2 averages to understand technological differences among metabolic monitors when they estimate the REE. To prove this hypothesis the mixing chamber (MC) and the breath-by-breath (BbB) techniques measured the VO2 and VCO2 averages and their variabilities. Variances and power spectrum energies in the 0-0.5 Hertz band were measured to establish technique differences in steady and non-steady state. A hybrid calorimeter with both IC techniques studied a population of 15 volunteers that underwent the clino-orthostatic maneuver in order to produce the two physiological stages. The results showed that inter-individual VO2 and VCO2 variabilities measured as variances were negligible using the MC, while variabilities measured as spectral energies using the BbB underwent increases of 71 and 56% (p < 0.05), respectively. Additionally, the energy analysis showed an unexpected cyclic rhythm at 0.025 Hertz only during the orthostatic stage, which is new physiological information not reported previously. The VO2 and VCO2 inter-individual averages increased to 63 and 39% by the MC (p < 0.05) and 32 and 40% using the BbB (p < 0.1), respectively, without noticeable statistical differences among techniques. The conclusions are: (a) metabolic monitors should simultaneously include the MC and the BbB techniques to correctly interpret the steady or non-steady state variabilities effect in the REE estimation, (b) the MC is the appropriate technique to compute averages since it behaves as a low-pass filter that minimizes variances, (c) the BbB is the ideal technique to measure the variabilities since it can work as a high-pass filter to generate discrete time series able to accomplish spectral analysis, and (d) the new physiological information in the VO2 and VCO2 variabilities can help to understand why metabolic monitors with dissimilar IC techniques give different results in the REE estimation.
Biologic plating of unstable distal radial fractures.
Kwak, Jae-Man; Jung, Gu-Hee
2018-04-14
Volar locking plating through the flexor carpi radialis is a well-established technique for treating unstable distal radial fractures, with few reported complications. In certain circumstances, including metaphyseal comminuted fractures, bridge plating through a pronator quadratus (PQ)-sparing approach may be required to preserve the soft tissue envelope. This study describes our prospective experience with bridge plating through indirect reduction. Thirty-three wrists (four 23A2, six 23A3, 15 23C1, and eight 23C2) underwent bridge plating through a PQ-sparing approach with indirect reduction from June 2006 to December 2010. Mean patient age was 56.8 years (range, 25-83 years), and the mean follow-up period was 47.5 months (range, 36-84 months). Changes in radiologic parameters (volar tilt, radial inclination, radial length, and ulnar variance) were analyzed, and functional results at final follow-up were evaluated by measuring the Modified Mayo Wrist Score (MMWS) and Modified Gartland-Werley Score (MGWS). All wrists achieved bone healing without significant complications after a single operation. At final follow-up, radial length was restored from an average of 3.7 mm to 11.0 mm, as were radial inclination, from 16.4° to 22.5°, and volar tilt, from -9.1° to 5.5°. However, radial length was overcorrected in three wrists, and two experienced residual dorsal tilt. Excellent and good results on the MGWS were achieved in 30 wrists (90.9%). The average MMWS outcome was 92.6 (range, 75-100). Our experience with bridge plating was similar to that reported in earlier publications. Compared with the conventional technique, bridge plating through a PQ-sparing approach may help in managing metaphyseal comminuted fractures of both cortices with a reduced radio-ulnar index.
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
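The variance-stabilization property that this approach relies on can be illustrated with the classical Anscombe transform, which maps Poisson counts to values of approximately unit variance regardless of the underlying mean. The sketch below only demonstrates that property on synthetic counts; it is not the authors' estimator.

```python
# Sketch: variance stabilization of Poisson data via the Anscombe transform.
import numpy as np

rng = np.random.default_rng(1)
for lam in (5, 20, 100):                   # different mean photon counts
    x = rng.poisson(lam, size=100_000)     # Poisson "pixel" values, variance equals lam
    z = 2.0 * np.sqrt(x + 3.0 / 8.0)       # Anscombe transform
    print(lam, x.var(), z.var())           # z.var() stays close to 1 for all lam
```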
Variance fluctuations in nonstationary time series: a comparative study of music genres
NASA Astrophysics Data System (ADS)
Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.
2004-05-01
An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
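One simple way to build the kind of local-variance series analyzed above is to compute the variance in successive windows of the signal and then examine how fluctuations of that series behave under aggregation. The sketch below, on a synthetic signal with an assumed window length, is only an illustration of this idea and not the authors' exact scaling procedure.

```python
# Sketch: local-variance series of a signal and its aggregated-variance scaling.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(size=2**18)               # stand-in for an audio time series

w = 256                                       # window length for the local variance
blocks = signal[: len(signal) // w * w].reshape(-1, w)
local_var = blocks.var(axis=1)                # nonstationary "local variance" series

# Variance-time plot: variance of aggregated means versus aggregation scale m.
# The slope of the log-log decay reflects correlations in the local-variance series.
for m in (1, 2, 4, 8, 16, 32, 64):
    agg = local_var[: len(local_var) // m * m].reshape(-1, m).mean(axis=1)
    print(m, agg.var())
```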
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
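For reference, the log-sinh transformation is commonly written as below; this rendering follows the standard form reported in the hydrological forecasting literature, with parameters a and b setting where the transform switches from log-like to nearly linear behaviour, and should be checked against the paper's notation.

```latex
% Log-sinh transformation of a (positively skewed) prediction variable y:
\[
  z \;=\; \frac{1}{b}\,\ln\!\bigl(\sinh(a + b\,y)\bigr), \qquad a,\, b > 0 .
\]
% For small a + b y the transform behaves like a logarithm (strong variance stabilization);
% for large values, \sinh grows exponentially and z \approx y + \text{const}, so the spread
% of transformed errors approaches a constant, matching the error pattern described above.
```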
Quantifying the vascular response to ischemia with speckle variance optical coherence tomography
Poole, Kristin M.; McCormack, Devin R.; Patil, Chetan A.; Duvall, Craig L.; Skala, Melissa C.
2014-01-01
Longitudinal monitoring techniques for preclinical models of vascular remodeling are critical to the development of new therapies for pathological conditions such as ischemia and cancer. In models of skeletal muscle ischemia in particular, there is a lack of quantitative, non-invasive and long term assessment of vessel morphology. Here, we have applied speckle variance optical coherence tomography (OCT) methods to quantitatively assess vascular remodeling and growth in a mouse model of peripheral arterial disease. This approach was validated on two different mouse strains known to have disparate rates and abilities of recovering following induction of hind limb ischemia. These results establish the potential for speckle variance OCT as a tool for quantitative, preclinical screening of pro- and anti-angiogenic therapies. PMID:25574425
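At each pixel, speckle variance OCT reduces to the interframe variance of the intensity over N repeated B-scans acquired at the same location: high where scatterers move (flowing blood), low in static tissue. The sketch below shows that computation on synthetic data; the array shapes, the Rayleigh stand-in for speckle, and the threshold are illustrative assumptions, not the study's acquisition or segmentation settings.

```python
# Sketch: speckle variance computed over N repeated OCT frames.
import numpy as np

rng = np.random.default_rng(3)
N, Z, X = 8, 256, 256                               # frames, depth pixels, lateral pixels
frames = rng.rayleigh(scale=1.0, size=(N, Z, X))    # stand-in for OCT intensity frames

sv = frames.var(axis=0)                             # interframe (speckle) variance per pixel

mask = sv > np.percentile(sv, 95)                   # simple vessel mask by thresholding
print(sv.shape, mask.mean())
```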
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
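A first-order Markov (exponentially correlated) process has a simple discrete-time update, and a clock-noise spectrum can be approximated by summing several such processes with different correlation times, as described above. The sketch below uses illustrative time constants and amplitudes; it is not the spectral fit from the report.

```python
# Sketch: sum of five discrete-time first-order Gauss-Markov processes.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 1.0, 10_000
taus = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])   # correlation times (s), illustrative
sigmas = np.array([1.0, 0.5, 0.2, 0.1, 0.05])            # stationary std of each process

phi = np.exp(-dt / taus)
x = np.zeros((len(taus), n))
for k in range(1, n):
    # x_k = phi * x_{k-1} + sqrt(1 - phi^2) * sigma * w_k keeps the stationary variance sigma^2
    x[:, k] = phi * x[:, k - 1] + np.sqrt(1.0 - phi**2) * sigmas * rng.normal(size=len(taus))

clock_error = x.sum(axis=0)          # approximation of the random clock error process
print(clock_error.std())
```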
Unsupervised classification of earth resources data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Jayroe, R. R., Jr.; Cummings, R. E.
1972-01-01
A new clustering technique is presented. It consists of two parts: (a) a sequential statistical clustering which is essentially a sequential variance analysis and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by the existing supervised maximum likelihood classification technique.
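The composite structure, a cheap first pass that seeds an iterative K-means refinement, can be sketched as below. The first-pass clustering here is a simple quantile-binning stand-in rather than the sequential variance analysis of the paper, and the multispectral pixels are synthetic.

```python
# Sketch of the two-part composite idea: crude initial clusters feed an iterative K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
pixels = np.vstack([rng.normal(m, 0.5, size=(300, 4)) for m in (0.0, 3.0, 6.0)])  # 4-band pixels

# Part (a): quick initial clusters (here: coarse quantile binning on the first band).
bins = np.quantile(pixels[:, 0], [1 / 3, 2 / 3])
initial_labels = np.digitize(pixels[:, 0], bins)
initial_centers = np.vstack([pixels[initial_labels == c].mean(axis=0) for c in range(3)])

# Part (b): iterative K-means refinement started from those centers.
km = KMeans(n_clusters=3, init=initial_centers, n_init=1).fit(pixels)
print(km.cluster_centers_)
```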
Use of high-order spectral moments in Doppler weather radar
NASA Astrophysics Data System (ADS)
di Vito, A.; Galati, G.; Veredice, A.
Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated. These are: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function with a least squares polynomial and differentiating it, and (3) autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computational burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.
Effect of various putty-wash impression techniques on marginal fit of cast crowns.
Nissan, Joseph; Rosner, Ofir; Bukhari, Mohammed Amin; Ghelfan, Oded; Pilo, Raphael
2013-01-01
Marginal fit is an important clinical factor that affects restoration longevity. The accuracy of three polyvinyl siloxane putty-wash impression techniques was compared by marginal fit assessment using the nondestructive method. A stainless steel master cast containing three abutments with three metal crowns matching the three preparations was used to make 45 impressions: group A = single-step technique (putty and wash impression materials used simultaneously), group B = two-step technique with a 2-mm relief (putty as a preliminary impression to create a 2-mm wash space followed by the wash stage), and group C = two-step technique with a polyethylene spacer (plastic spacer used with the putty impression followed by the wash stage). Accuracy was assessed using a toolmaker microscope to measure and compare the marginal gaps between each crown and finish line on the duplicated stone casts. Each abutment was further measured at the mesial, buccal, and distal aspects. One-way analysis of variance was used for statistical analysis. P values and Scheffe post hoc contrasts were calculated. Significance was determined at .05. One-way analysis of variance showed significant differences among the three impression techniques in all three abutments and at all three locations (P < .001). Group B yielded dies with minimal gaps compared to groups A and C. The two-step impression technique with 2-mm relief was the most accurate regarding the crucial clinical factor of marginal fit.
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
NASA Astrophysics Data System (ADS)
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little known theorem says that the solution of any set of o.d.e.'s is exactly given by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the source of the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
2016-08-01
Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.
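A generic version of the machine-learning step is to learn a mapping from lidar-derived features to tower-measured TI and use it as a correction. The sketch below does this on synthetic data; the feature names, the random-forest regressor, and the error model are assumptions for illustration, not the model described in the abstract.

```python
# Generic sketch: learn a correction from lidar-derived features to met-tower TI.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 2_000
lidar_ti = rng.uniform(0.02, 0.30, n)                  # noise/volume-affected lidar TI
mean_ws = rng.uniform(3.0, 20.0, n)                    # mean wind speed
stability = rng.normal(0.0, 1.0, n)                    # a stability proxy
tower_ti = 0.85 * lidar_ti + 0.002 * mean_ws + 0.01 * stability + rng.normal(0, 0.01, n)

X = np.column_stack([lidar_ti, mean_ws, stability])
X_tr, X_te, y_tr, y_te = train_test_split(X, tower_ti, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("RMSE before:", np.sqrt(np.mean((X_te[:, 0] - y_te) ** 2)))        # raw lidar TI error
print("RMSE after :", np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))  # corrected TI error
```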
[Plot analysis in the dark coniferous ecosystem using GPS and GIS techniques].
Guan, Wenbin; Xie, Chunhua; Wu, Jian'an; Yu, Xinxiao; Chen, Gengwei; Li, Tongyang
2002-07-01
It is generally difficult to survey primary forests located in high-altitude regions. However, it is convenient to identify and to recognize plots with the aid of GPS and GIS techniques, which can also display the spatial pattern of trees precisely. Using the method of rapid-static positioning combined with tape measurement, it is concluded that, except for some points, the positioning was relatively precise: the average value of RMS was 2.84, the variance was 2.96, and delta B, delta L, and delta H were 1.2, 1.2, and 4.3 m with their variances being +/- 0.6, +/- 1.1, and +/- 21.1, respectively, which could meet the needs of forestry management sufficiently. Combined with other models, many ecological processes at small and even medium scales, such as the dynamics of gap succession, could also be simulated visually by GIS. Therefore, the "2S" techniques are well suited to forest ecosystem management at the fine scale, especially in high-altitude areas.
Comprehensive Analysis of LC/MS Data Using Pseudocolor Plots
NASA Astrophysics Data System (ADS)
Crutchfield, Christopher A.; Olson, Matthew T.; Gourgari, Evgenia; Nesterova, Maria; Stratakis, Constantine A.; Yergey, Alfred L.
2013-02-01
We have developed new applications of the pseudocolor plot for the analysis of LC/MS data. These applications include spectral averaging, analysis of variance, differential comparison of spectra, and qualitative filtering by compound class. These applications have been motivated by the need to better understand LC/MS data generated from analysis of human biofluids. The examples presented use data generated to profile steroid hormones in urine extracts from a Cushing's disease patient relative to a healthy control, but are general to any discovery-based scanning mass spectrometry technique. In addition to new visualization techniques, we introduce a new metric of variance: the relative maximum difference from the mean. We also introduce the concept of substructure-dependent analysis of steroid hormones using precursor ion scans. These new analytical techniques provide an alternative approach to traditional untargeted metabolomics workflow. We present an approach to discovery using MS that essentially eliminates alignment or preprocessing of spectra. Moreover, we demonstrate the concept that untargeted metabolomics can be achieved using low mass resolution instrumentation.
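A pseudocolor rendering of scanning MS data of the kind described above is essentially a two-dimensional intensity map, retention time on one axis and m/z on the other, with intensity as color. The sketch below builds such a plot from synthetic data; it does not reproduce the paper's spectral-averaging, variance, or precursor-ion applications.

```python
# Minimal sketch of a pseudocolor rendering of LC/MS data (synthetic).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
rt = np.linspace(0, 30, 300)          # retention time (min)
mz = np.linspace(100, 500, 400)       # m/z
intensity = rng.gamma(1.0, 1.0, size=(len(mz), len(rt)))
# Add a fake chromatographic peak for a single ion at rt ~ 12 min, m/z ~ 331.
intensity += 50 * np.exp(-((rt - 12) ** 2) / 0.5)[None, :] * np.exp(-((mz - 331) ** 2) / 2.0)[:, None]

plt.pcolormesh(rt, mz, np.log1p(intensity), shading="auto")
plt.xlabel("retention time (min)")
plt.ylabel("m/z")
plt.colorbar(label="log intensity")
plt.show()
```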
NASA Technical Reports Server (NTRS)
Schutz, Bob E.; Baker, Gregory A.
1997-01-01
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
Improving Signal Detection using Allan and Theo Variances
NASA Astrophysics Data System (ADS)
Hardy, Andrew; Broering, Mark; Korsch, Wolfgang
2017-09-01
Precision measurements often deal with small signals buried within electronic noise. The extraction of these signals can be enhanced through digital signal processing, and improving these techniques yields better signal-to-noise ratios. Studies presently performed at the University of Kentucky are utilizing the electro-optic Kerr effect to understand cell charging effects within ultra-cold neutron storage cells. This work is relevant for the neutron electric dipole moment (nEDM) experiment at Oak Ridge National Laboratory. These investigations, and future investigations in general, will benefit from the illustrated improved analysis techniques. This project will showcase various methods for determining the optimum duration over which data should be gathered. Typically, extending the measuring time of an experimental run reduces the averaged noise. However, experiments also encounter drift due to fluctuations, which mitigates the benefits of extended data gathering. Through comparing FFT averaging techniques, along with Allan and Theo variance measurements, quantifiable differences in signal detection will be presented. This research is supported by DOE Grants: DE-FG02-99ER411001, DE-AC05-00OR22725.
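The Allan variance referenced above is, for fractional-frequency samples averaged over a duration tau, half the mean squared difference of successive averages; its minimum over tau indicates the most useful averaging time. The sketch below computes a non-overlapping Allan deviation on synthetic data; the noise model and averaging factors are illustrative assumptions.

```python
# Sketch: non-overlapping Allan variance of a fractional-frequency series.
import numpy as np

def allan_variance(y, m):
    """Allan variance at averaging factor m for fractional-frequency samples y."""
    n = (len(y) // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)      # averages over tau = m * tau0
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(8)
y = rng.normal(size=100_000) + np.cumsum(rng.normal(scale=1e-4, size=100_000))  # white noise + drift
for m in (1, 10, 100, 1_000, 10_000):
    print(m, np.sqrt(allan_variance(y, m)))       # Allan deviation versus averaging time
```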
NASA Astrophysics Data System (ADS)
Baker, Gregory Allen
The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.
Multidimensional Test Assembly Based on Lagrangian Relaxation Techniques. Research Report 98-08.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.
In this paper, a mathematical programming approach is presented for the assembly of ability tests measuring multiple traits. The values of the variance functions of the estimators of the traits are minimized, while test specifications are met. The approach is based on Lagrangian relaxation techniques and provides good results for the two…
Physical heterogeneity control on effective mineral dissolution rates
NASA Astrophysics Data System (ADS)
Jung, Heewon; Navarre-Sitchler, Alexis
2018-04-01
Hydrologic heterogeneity may be an important factor contributing to the discrepancy in laboratory and field measured dissolution rates, but the governing factors influencing mineral dissolution rates among various representations of physical heterogeneity remain poorly understood. Here, we present multiple reactive transport simulations of anorthite dissolution in 2D latticed random permeability fields and link the information from local grid scale (1 cm or 4 m) dissolution rates to domain-scale (1m or 400 m) effective dissolution rates measured by the flux-weighted average of an ensemble of flow paths. We compare results of homogeneous models to heterogeneous models with different structure and layered permeability distributions within the model domain. Chemistry is simplified to a single dissolving primary mineral (anorthite) distributed homogeneously throughout the domain and a single secondary mineral (kaolinite) that is allowed to dissolve or precipitate. Results show that increasing size in correlation structure (i.e. long integral scales) and high variance in permeability distribution are two important factors inducing a reduction in effective mineral dissolution rates compared to homogeneous permeability domains. Larger correlation structures produce larger zones of low permeability where diffusion is an important transport mechanism. Due to the increased residence time under slow diffusive transport, the saturation state of a solute with respect to a reacting mineral approaches equilibrium and reduces the reaction rate. High variance in permeability distribution favorably develops large low permeability zones that intensifies the reduction in mixing and effective dissolution rate. However, the degree of reduction in effective dissolution rate observed in 1 m × 1 m domains is too small (<1% reduction from the corresponding homogeneous case) to explain several orders of magnitude reduction observed in many field studies. When multimodality in permeability distribution is approximated by high permeability variance in 400 m × 400 m domains, the reduction in effective dissolution rate increases due to the effect of long diffusion length scales through zones with very slow reaction rates. The observed scale dependence becomes complicated when pH dependent kinetics are compared to the results from pH independent rate constants. In small domains where the entire domain is reactive, faster anorthite dissolution rates and slower kaolinite precipitation rates relative to pH independent rates at far-from-equilibrium conditions reduce the effective dissolution rate by increasing the saturation state. However, in large domains where less- or non-reactive zones develop, higher kaolinite precipitation rates in less reactive zones increase the effective anorthite dissolution rates relative to the rates observed in pH independent cases.
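Correlated random log-permeability fields of the kind used above can be generated in several ways; one simple approach is to smooth white noise with a kernel whose width sets the correlation (integral) scale and then rescale to the target variance. The sketch below uses that shortcut with a Gaussian kernel and illustrative parameters; it is not the study's field generator or correlation model.

```python
# Sketch: a correlated random log-permeability field from smoothed, rescaled white noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(9)
nx = ny = 256
integral_scale = 16          # larger value -> larger correlated structures (in grid cells)
target_var = 2.0             # target variance of ln(K)

field = gaussian_filter(rng.normal(size=(ny, nx)), sigma=integral_scale)
field = (field - field.mean()) / field.std() * np.sqrt(target_var)   # zero mean, target variance
K = np.exp(field)            # lognormal permeability field
print(K.min(), K.max())
```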
Arveschoug, A K; Revsbech, P; Brøchner-Mortensen, J
1998-07-01
Using the determination of distal blood pressure (DBP) measured using the strain gauge technique as an example of a routine clinical physiological investigation involving many different observers (laboratory technicians), the present study was carried out to assess (1) the influence of the number of observers and the number of analyses made by each observer on the precision of a definitive value; and (2) the minimal difference between two determinations to detect a real change. A total of 45 patients participated in the study. They were all referred for DBP determination on suspicion of arterial peripheral vascular disease. In 30 of the patients, the DBP curves were read twice, with a 5-week interval, by 10 laboratory technicians. The results were analysed using the variance component model. The remaining 15 patients had their DBP determined twice on two different days with an interval of 1-3 days and the total day-to-day variation (SDdiff) of DBP was determined. The inter- and intraobserver variations were, respectively, 5.7 and 4.9 mmHg at ankle level and 3.5 and 2.7 mmHg at toe level. The index values as related to systolic pressure were somewhat lower. The mean day-to-day variation was 11 mmHg at ankle level and 10 mmHg at toe level, thereby giving a minimal significant difference between two DBP determinations of 22 mmHg at ankle and 20 mmHg at toe level. To decrease the value of SD (standard deviation) on a definitive determination of DBP and index values, it was slightly more effective if the value was based on two observers performing one independent DBP curve reading than if one observer made one or two DBP curve readings. The reduction in SDdiff was greatest at ankle level. The extent of the Sddiff decrease was greatest when two different observers made a single DBP reading each at both determinations compared with one different observer making two readings at each determination. Surprisingly, about half of the maximum reduction in the SDdiff was achieved just by increasing the number of observers from one to two. We have found variance component analyses to be a suitable method for determining intra- and interobserver variation when several different observers take part in a routine laboratory investigation. It may be applied to other laboratory methods such as renography, isotope cardiography and myocardial perfusion single-photon emission computerized tomography (SPECT) scintigraphy, in which the final result may be affected by individual judgement during processing.
NASA Astrophysics Data System (ADS)
Ramos-Méndez, José; Schuemann, Jan; Incerti, Sebastien; Paganetti, Harald; Schulte, Reinhard; Faddegon, Bruce
2017-08-01
Flagged uniform particle splitting was implemented with two methods to improve the computational efficiency of Monte Carlo track structure simulations with TOPAS-nBio by enhancing the production of secondary electrons in ionization events. In method 1 the Geant4 kernel was modified. In method 2 Geant4 was not modified. In both methods a unique flag number assigned to each new split electron was inherited by its progeny, permitting reclassification of the split events as if produced by independent histories. Computational efficiency and accuracy were evaluated for simulations of 0.5-20 MeV protons and 1-20 MeV u-1 carbon ions for three endpoints: (1) mean of the ionization cluster size distribution, (2) mean number of DNA single-strand breaks (SSBs) and double-strand breaks (DSBs) classified with DBSCAN, and (3) mean number of SSBs and DSBs classified with a geometry-based algorithm. For endpoint (1), simulation efficiency was 3 times lower when splitting electrons generated by direct ionization events of primary particles than when splitting electrons generated by the first ionization events of secondary electrons. The latter technique was selected for further investigation. The following results are for method 2, with relative efficiencies about 4.5 times lower for method 1. For endpoint (1), relative efficiency at 128 split electrons approached maximum, increasing with energy from 47.2 ± 0.2 to 66.9 ± 0.2 for protons, decreasing with energy from 51.3 ± 0.4 to 41.7 ± 0.2 for carbon. For endpoint (2), relative efficiency increased with energy, from 20.7 ± 0.1 to 50.2 ± 0.3 for protons, 15.6 ± 0.1 to 20.2 ± 0.1 for carbon. For endpoint (3) relative efficiency increased with energy, from 31.0 ± 0.2 to 58.2 ± 0.4 for protons, 23.9 ± 0.1 to 26.2 ± 0.2 for carbon. Simulation results with and without splitting agreed within 1% (2 standard deviations) for endpoints (1) and (2), within 2% (1 standard deviation) for endpoint (3). In conclusion, standard particle splitting variance reduction techniques can be successfully implemented in Monte Carlo track structure codes.
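The variance-reduction idea behind particle splitting with weight conservation can be shown on a toy problem: estimating transmission through a stack of attenuating layers. Splitting every survivor into two copies of half the weight leaves the estimator unbiased while shrinking the tally variance. The sketch below is only a cartoon of that principle, with made-up layer counts and probabilities; it has nothing to do with track-structure physics or the TOPAS-nBio implementation.

```python
# Toy illustration of particle splitting with weight conservation.
import numpy as np

rng = np.random.default_rng(10)
layers, p_survive, n_hist = 10, 0.5, 20_000

def run(split):
    tallies = np.empty(n_hist)
    for h in range(n_hist):
        bank = [1.0]                                  # weights of particles still alive
        for _ in range(layers):
            new_bank = []
            for w in bank:
                if rng.random() < p_survive:          # particle survives this layer
                    new_bank += [w / 2.0, w / 2.0] if split else [w]
            bank = new_bank
        tallies[h] = sum(bank)                        # weight reaching the far side
    return tallies.mean(), tallies.std(ddof=1) / np.sqrt(n_hist)

print("analog   :", run(split=False))   # mean ~ 0.5**10, larger standard error
print("splitting:", run(split=True))    # same mean, markedly smaller standard error
```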
40 CFR 142.63 - Variances and exemptions from the maximum contaminant level for total coliforms.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Identification of Best Technology, Treatment Techniques or Other Means Generally...
Distal radius osteotomy with volar locking plates based on computer simulation.
Miyake, Junichi; Murase, Tsuyoshi; Moritomo, Hisao; Sugamoto, Kazuomi; Yoshikawa, Hideki
2011-06-01
Corrective osteotomy using dorsal plates and structural bone graft usually has been used for treating symptomatic distal radius malunions. However, the procedure is technically demanding and requires an extensive dorsal approach. Residual deformity is a relatively frequent complication of this technique. We evaluated the clinical applicability of a three-dimensional osteotomy using computer-aided design and manufacturing techniques with volar locking plates for distal radius malunions. Ten patients with metaphyseal radius malunions were treated. Corrective osteotomy was simulated with the help of three-dimensional bone surface models created using CT data. We simulated the most appropriate screw holes in the deformed radius using computer-aided design data of a locking plate. During surgery, using a custom-made surgical template, we predrilled the screw holes as simulated. After osteotomy, plate fixation using predrilled screw holes enabled automatic reduction of the distal radial fragment. Autogenous iliac cancellous bone was grafted after plate fixation. The median volar tilt, radial inclination, and ulnar variance improved from -20°, 13°, and 6 mm, respectively, before surgery to 12°, 24°, and 1 mm, respectively, after surgery. The median wrist flexion improved from 33° before surgery to 60° after surgery. The median wrist extension was 70° before surgery and 65° after surgery. All patients experienced wrist pain before surgery, which disappeared or decreased after surgery. Surgeons can operate precisely and easily using this advanced technique. It is a new treatment option for malunion of distal radius fractures.
Herbst, Daniel P
2014-09-01
Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient's systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26-33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique.
Herbst, Daniel P.
2014-01-01
Abstract: Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient’s systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26–33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique. PMID:26357790
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.
2016-12-01
Accuracy of weather forecasts in Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques strengthened by increased super-computing resources, the weather forecasting systems still have their limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted NE U.S., mostly defined by the occurrence of high wind speeds (i.e. blizzards, winter storms, hurricanes and thunderstorms). The predicted wind speed, wind direction and temperature by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the mentioned techniques, exploring various ways that those variables influence the minimization of the prediction error (systematic and random). This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. In order to evaluate the techniques, leave-one-out cross validation procedure was implemented regarding 145 storms as the training dataset. The analog ensemble method selects a set of past observations that corresponded to the best analogs of the numerical weather prediction and provides a set of ensemble members of the selected observation dataset. The set of ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
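The core of the analog-ensemble step is a nearest-neighbor search: for a new model forecast, find the k most similar past forecasts in a normalized predictor space and use their verifying observations as the ensemble. The sketch below shows that step on synthetic data; the predictor set, equal predictor weighting, and the treatment of wind direction as an ordinary Euclidean coordinate are simplifying assumptions, not the study's configuration.

```python
# Sketch of the analog-ensemble step on synthetic forecast/observation pairs.
import numpy as np

rng = np.random.default_rng(11)
n_past = 1_000
# Past model forecasts: [10-m wind speed, wind direction (deg), temperature]
past_fcst = np.column_stack([rng.uniform(0, 30, n_past),
                             rng.uniform(0, 360, n_past),
                             rng.uniform(-10, 30, n_past)])
past_obs = past_fcst[:, 0] * 0.9 + rng.normal(0, 1.5, n_past)   # observed wind speed

def analog_ensemble(new_fcst, k=20):
    mu, sd = past_fcst.mean(axis=0), past_fcst.std(axis=0)
    z, zq = (past_fcst - mu) / sd, (new_fcst - mu) / sd
    idx = np.argsort(((z - zq) ** 2).sum(axis=1))[:k]   # k best analogs of the new forecast
    return past_obs[idx]                                 # ensemble of verifying observations

members = analog_ensemble(np.array([18.0, 250.0, 4.0]))
print(members.mean(), members.std())                      # deterministic value and spread
```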
Dan, Michael; Phillips, Alfred; Simonian, Marcus; Flannagan, Scott
2015-06-01
We provide a review of the literature on reduction techniques for posterior hip dislocations and present our experience with a novel technique for the reduction of acute posterior hip dislocations in the ED, the 'rocket launcher' technique. We present our results with six patients with prosthetic posterior hip dislocation treated in our rural ED. We recorded patient demographics. The technique involves placing the patient's knee over the physician's shoulder and holding the lower leg like a 'rocket launcher', allowing the physician's shoulder to work as a fulcrum in an ergonomically friendly manner for the reducer. We used Fisher's t-test for cohort analysis between reduction techniques. Of our patients, the mean age was 74 years (range 66 to 85 years). We had an 83% success rate. The one patient in whom the 'rocket launcher' technique failed was a hemi-arthroplasty patient in whom all other closed techniques also failed and who needed open reduction. When compared with the Allis (62% success rate), Whistler (60% success rate) and Captain Morgan (92% success rate) techniques, there was no statistically significant difference in the success of the reduction techniques. There were no neurovascular or periprosthetic complications. We have described a reduction technique for posterior hip dislocations. Placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher' allows the physician's shoulder to work as a fulcrum, making the technique mechanically and ergonomically superior to standard techniques. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier-for transcriptional autorepression, it is frequently negligible.
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-09-21
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
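A minimal formalization of the exponential decay described above, assuming a single calibrated decay constant k, shows how the mingling-to-eruption time is read off from the residual variance:

```latex
% Exponential decay of the concentration variance during mixing, with sigma^2_0 the
% initial variance and k the experimentally calibrated decay rate (CVD-R):
\[
  \sigma^{2}(t) \;=\; \sigma^{2}_{0}\, e^{-k t}
  \qquad\Longrightarrow\qquad
  t \;=\; \frac{1}{k}\,\ln\!\frac{\sigma^{2}_{0}}{\sigma^{2}(t)} .
\]
% Measuring the residual compositional variance of an erupted product therefore yields the
% elapsed mixing-to-eruption time once k has been calibrated for the magmas involved.
```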
Compression of Morbidity and Mortality: New Perspectives
Stallard, Eric
2017-01-01
Compression of morbidity is a reduction over time in the total lifetime days of chronic disability, reflecting a balance between (1) morbidity incidence rates and (2) case-continuance rates—generated by case-fatality and case-recovery rates. Chronic disability includes limitations in activities of daily living and cognitive impairment, which can be covered by long-term care insurance. Morbidity improvement can lead to a compression of morbidity if the reductions in age-specific prevalence rates are sufficiently large to overcome the increases in lifetime disability due to concurrent mortality improvements and progressively higher disability prevalence rates with increasing age. Compression of mortality is a reduction over time in the variance of age at death. Such reductions are generally accompanied by increases in the mean age at death; otherwise, for the variances to decrease, the death rates above the mean age at death would need to increase, and this has rarely been the case. Mortality improvement is a reduction over time in the age-specific death rates and a corresponding increase in the cumulative survival probabilities and age-specific residual life expectancies. Mortality improvement does not necessarily imply concurrent compression of mortality. This paper reviews these concepts, describes how they are related, shows how they apply to changes in mortality over the past century and to changes in morbidity over the past 30 years, and discusses their implications for future changes in the United States. The major findings of the empirical analyses are the substantial slowdowns in the degree of mortality compression over the past half century and the unexpectedly large degree of morbidity compression that occurred over the morbidity/disability study period 1984–2004; evidence from other published sources suggests that morbidity compression may be continuing. PMID:28740358
A systematic comparison of the closed shoulder reduction techniques.
Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J
2017-05-01
To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level. A PubMed and EMBASE query was performed, screening all relevant literature of closed reduction techniques mentioning the success rate written in English, Dutch, German, and Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of included studies and excluded studies with a poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened for their title and abstract, of which 217 studies were screened full-text and finally 13 studies were included. These studies included 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that the scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
Kappa statistic is widely used to assess the agreement between two procedures in the independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic without within-cluster correlation structure or distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
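The kappa point estimate itself is standard: observed agreement corrected for chance agreement from the table margins. The sketch below computes it for a 2 x 2 matched-pair table with made-up counts; the clustered variance estimator proposed in the paper is not reproduced here.

```python
# Sketch: Cohen's kappa for a matched-pair agreement table.
import numpy as np

def cohens_kappa(table):
    """table[i, j] = number of pairs rated i by procedure 1 and j by procedure 2."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                                    # observed agreement
    p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2   # chance agreement from margins
    return (p_obs - p_exp) / (1.0 - p_exp)

print(cohens_kappa([[40, 5],
                    [10, 45]]))     # illustrative counts; prints 0.7
```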
Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.
Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong
2018-03-01
Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. Flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit that CPs often have the form of task-time matrices. This paper presents a new method parsing complex BPMN models and aligning traces to the models heuristically. A case study on variance analysis is undertaken, where a CP from the practice and two large sets of patients data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data are feasible, whereas that was not the case for the existing analysis techniques. We also provide meaningful insights for further improvement.
NASA Technical Reports Server (NTRS)
Hill, Emma M.; Ponte, Rui M.; Davis, James L.
2007-01-01
Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain a significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with IB also removed, produces residual time series with mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
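The variance-reduction percentages quoted above correspond to the fraction of tide-gauge variance explained once the model series is removed. The sketch below computes that statistic on synthetic monthly series; the exact definition used in the study (for example, any detrending or regression step) may differ.

```python
# Sketch of the variance-reduction statistic: 1 - var(gauge - model) / var(gauge).
import numpy as np

def variance_reduction(tide_gauge, model):
    residual = tide_gauge - model
    return 1.0 - residual.var() / tide_gauge.var()

rng = np.random.default_rng(12)
t = np.arange(240)                                                     # 20 years of monthly means
model = 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)    # model sea level (mm)
tide_gauge = model + rng.normal(0, 20, t.size)                         # model signal plus unmodeled noise
print(variance_reduction(tide_gauge, model))
```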
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy in terms of the variance of the registration error was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal to noise ratio. The final expressions were presented to emphasize both the form and simplicity of their representation. In the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal to noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
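The stated proportionality for the registration-error variance can be summarized compactly; the rendering below (constants omitted) is my reading of the result quoted above, with B_e the effective bandwidth and SNR the signal-to-noise ratio.

```latex
% Registration-error variance, inversely proportional to the squared effective bandwidth
% times the signal-to-noise ratio:
\[
  \sigma_{\hat{\varepsilon}}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}} .
\]
```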
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
Johnson, Henry C.; Rosevear, G. Craig
1977-01-01
This study explored the relationship between traditional admissions criteria, performance in the first semester of medical school, and performance on the National Board of Medical Examiners' (NBME) Examination, Part 1 for minority medical students, non-minority medical students, and the two groups combined. Correlational analysis and step-wise multiple regression procedures were used as the analysis techniques. A different pattern of admissions variables related to National Board Part 1 performance for the two groups. The General Information section of the Medical College Admission Test (MCAT) contributed the most variance for the minority student group. MCAT-Science contributed the most variance for the non-minority student group. MCATs accounted for a substantial portion of the variance on the National Board examination. PMID:904005
Aperture averaging in strong oceanic turbulence
NASA Astrophysics Data System (ADS)
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
Estimating means and variances: The comparative efficiency of composite and grab samples.
Brumelle, S; Nemetz, P; Casey, D
1984-03-01
This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
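The kurtosis dependence described above is easy to reproduce by simulation: build unbiased variance estimators from grab values and from composite (pooled) means, then compare how much the two estimators themselves vary. The sketch below does this for a platykurtic (uniform) and a leptokurtic (Laplace) population; the sample sizes and distributions are illustrative assumptions.

```python
# Simulation sketch: efficiency of grab vs. composite sampling for estimating the variance.
import numpy as np

rng = np.random.default_rng(13)
n_periods, n_per_period, n_rep = 30, 4, 5_000

def var_of_variance_estimates(draw):
    grab, comp = [], []
    for _ in range(n_rep):
        x = draw((n_periods, n_per_period))
        grab.append(x[:, 0].var(ddof=1))                          # grab: one analysis per period
        # composite: each analysis is the mean of n samples, so rescale by n to estimate sigma^2
        comp.append(n_per_period * x.mean(axis=1).var(ddof=1))
    return np.var(grab), np.var(comp)                             # spread of the two estimators

print("uniform :", var_of_variance_estimates(lambda s: rng.uniform(0, 1, s)))  # grab wins
print("laplace :", var_of_variance_estimates(lambda s: rng.laplace(0, 1, s)))  # composite wins
```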
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... (OMB) for review, as required by the Paperwork Reduction Act. The Department is soliciting public... resultant costs also serve to further stabilize the mortgage insurance premiums charged by FHA and the... Insurance Benefits, HUD-90035 Information/Disclosure, HUD-90041 Request for Variance, Pre-foreclosure sale...
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program.
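For context, the standard LCMV weight vector minimizes the output power subject to linear constraints, w = R^{-1} C (C^H R^{-1} C)^{-1} f. The sketch below evaluates that closed form for a uniform linear array with one unit-gain constraint on the target direction; the array geometry, directions, and covariance are illustrative assumptions, and the AI-based weight optimization of the paper is not reproduced.

```python
# Sketch: closed-form LCMV beamformer weights for a uniform linear array.
import numpy as np

def steering_vector(theta_deg, n_elem, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    k = np.arange(n_elem)
    return np.exp(1j * 2 * np.pi * d * k * np.sin(np.radians(theta_deg)))

n_elem = 8
a_desired = steering_vector(0.0, n_elem)            # target user at broadside
a_interf = steering_vector(40.0, n_elem)            # interferer direction

# Covariance of interference plus noise (known here; estimated from snapshots in practice).
R = 10.0 * np.outer(a_interf, a_interf.conj()) + np.eye(n_elem)

C = a_desired[:, None]                              # single constraint: unit gain on the target
f = np.array([1.0])
Rinv_C = np.linalg.solve(R, C)
w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)   # w = R^{-1} C (C^H R^{-1} C)^{-1} f

print(abs(w.conj() @ a_desired))                    # ~1: constraint satisfied toward the target
print(abs(w.conj() @ a_interf))                     # small: interferer suppressed
```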
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually not able to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented in MATLAB. PMID:25147859
Aromatherapy hand massage for older adults with chronic pain living in long-term care.
Cino, Kathleen
2014-12-01
Older adults living in long-term care experience high rates of chronic pain. Concerns with pharmacologic management have spurred alternative approaches. The purpose of this study was to examine a nursing intervention for older adults with chronic pain. This prospective, randomized controlled trial compared the effects of aromatherapy M technique hand massage, M technique without aromatherapy, and nurse presence on chronic pain. Chronic pain was measured with the Geriatric Multidimensional Pain and Illness Inventory factors (pain and suffering, life interference, and emotional distress) and the Iowa Pain Thermometer, a pain intensity scale. Three groups of 39 to 40 participants recruited from seven long-term care facilities participated twice weekly for 4 weeks. Analysis included multivariate analysis of variance and analysis of variance. Participants experienced decreased levels of chronic pain intensity. Group membership had a significant effect on the Geriatric Multidimensional Pain Inventory Pain and Suffering scores; Iowa Pain Thermometer scores differed significantly within groups. M technique hand massage with or without aromatherapy significantly decreased chronic pain intensity compared to nurse presence visits. M technique hand massage is a safe, simple, but effective intervention. Caregivers using it could improve chronic pain management in this population. © The Author(s) 2014.
FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport
NASA Astrophysics Data System (ADS)
Munk, Madicken
The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport to reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems where there exists strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O has less sensitivity to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the O-methods, ray effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the O-method's response to changes in quadrature order and PN order, and its ray effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the O-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.
NASA Technical Reports Server (NTRS)
Riddick, Stephen E.; Hinton, David A.
2000-01-01
A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate the reduced approach separation criteria needed to increase airport productivity. This report evaluates the model's sensitivity to various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option and wake demise definition), and post-processing techniques (rounding of the provided spacing values and controller time variance).
The missed inferior alveolar block: a new look at an old problem.
Milles, M
1984-01-01
A variation of a previously described technique to obtain mandibular block anesthesia is presented. This technique varies from those previously described in that it uses palpable anatomic landmarks, both extra- and intraoral, to orient the placement of the needle. This technique relies on several readily observed landmarks and their integration. Because palpable landmarks are used, consistent results can be easily obtained even in patients who present with a wide variety of anatomical variances which otherwise make this injection technique difficult and prone to failure.
Risk factors of chronic periodontitis on healing response: a multilevel modelling analysis.
Song, J; Zhao, H; Pan, C; Li, C; Liu, J; Pan, Y
2017-09-15
Chronic periodontitis is a multifactorial polygenetic disease for which an increasing number of associated factors have been identified over recent decades. Longitudinal epidemiologic studies have demonstrated that these risk factors are related to the progression of the disease. Traditionally, a multivariate regression model has been used to find risk factors associated with chronic periodontitis; however, standard statistical procedures require that observations be independent. Multilevel modelling (MLM) data analysis has been widely used in recent years because it accommodates the hierarchical structure of the data, decomposes the error terms into different levels, and provides a new analytic method and framework for solving this problem. The purpose of our study was to investigate the relationship between clinical periodontal indices and risk factors in chronic periodontitis through MLM analysis and to identify high-risk individuals in the clinical setting. Fifty-four patients with moderate to severe periodontitis were included. They were treated by means of non-surgical periodontal therapy and then made regular follow-up visits at 3, 6, and 12 months after therapy. Each patient answered a questionnaire survey and underwent measurement of clinical periodontal parameters. Compared with baseline, probing depth (PD) and clinical attachment loss (CAL) improved significantly after non-surgical periodontal therapy with regular follow-up visits at 3, 6, and 12 months. The null model and variance component models with no independent variables included were fitted first to investigate the variance of the PD and CAL reductions across all three levels; they showed a statistically significant difference (P < 0.001), establishing that MLM data analysis was necessary. Site-level variables had effects on PD and CAL reduction, explaining 77-78% of the PD reduction and 70-80% of the CAL reduction at 3, 6, and 12 months; the other levels explained only 20-30% of the PD and CAL reductions. Site-level therefore had the greatest effect on PD and CAL reduction. Non-surgical periodontal therapy with regular follow-up visits had a remarkable curative effect, and all three levels had a substantial influence on the reduction of PD and CAL, with site-level having the largest effect.
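A minimal sketch of the kind of variance-component (random-intercept) model the abstract refers to, written here with statsmodels on simulated stand-in data; the variable names, the nesting of sites within patients, and all numbers are assumptions, not the study's data or its exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: sites nested within patients (names hypothetical).
rng = np.random.default_rng(0)
n_patients, n_sites = 54, 6
patient = np.repeat(np.arange(n_patients), n_sites)
patient_effect = rng.normal(0, 0.4, n_patients)[patient]    # patient-level variance
baseline_pd = rng.normal(6.0, 1.0, n_patients * n_sites)     # site-level covariate
pd_reduction = 1.5 + 0.3 * (baseline_pd - 6.0) + patient_effect \
               + rng.normal(0, 0.6, n_patients * n_sites)    # site-level residual

df = pd.DataFrame({"patient": patient,
                   "baseline_pd": baseline_pd,
                   "pd_reduction": pd_reduction})

# Random-intercept ("variance component") model: the residual variance is the
# site-level component, the group variance is the patient-level component.
model = smf.mixedlm("pd_reduction ~ baseline_pd", df, groups=df["patient"])
fit = model.fit()
print(fit.summary())

site_var = fit.scale                         # residual (site-level) variance
patient_var = float(fit.cov_re.iloc[0, 0])   # patient-level variance
print("share of variance at site level:", site_var / (site_var + patient_var))
```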
Improved Hybrid Modeling of Spent Fuel Storage Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bibber, Karl van
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies, benefiting the public by increasing accuracy at lower computational effort for many problems of energy, security, and economic importance.
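For orientation, the standard CADIS construction that both CADIS and FW-CADIS build on can be sketched in a few lines: given a deterministic adjoint (importance) solution and the true source, one forms a biased source and consistent weight-window targets. The 1-D mesh, source, and adjoint profile below are hypothetical stand-ins, not Denovo/ADVANTG output.

```python
import numpy as np

def cadis_parameters(source, adjoint_flux):
    """Sketch of the standard CADIS construction on a 1-D mesh:
    biased source q_hat = q * phi_adj / R and weight-window centers
    w = R / phi_adj, where R is the estimated detector response."""
    R = np.sum(source * adjoint_flux)           # response estimate <q, phi_adj>
    biased_source = source * adjoint_flux / R   # importance-weighted source pdf
    ww_centers = R / adjoint_flux               # target weights, consistent with q_hat
    return R, biased_source, ww_centers

# Hypothetical 1-D slab: source on the left, detector importance rising to the right.
cells = np.arange(50)
source = np.where(cells < 5, 1.0, 0.0)
adjoint_flux = np.exp(-0.3 * (49 - cells))      # stand-in for a deterministic adjoint solve

R, q_hat, ww = cadis_parameters(source, adjoint_flux)
print("response estimate:", R)
print("biased source sums to 1:", np.isclose(q_hat.sum(), 1.0))
print("weight-window centers span:", ww.min(), "to", ww.max())
```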
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A; Zbijewski, W; Stayman, J
Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with a baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to an “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
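A rough sketch of the kernel-smoothing and angular-interpolation step described above, assuming a hypothetical stack of sparsely sampled, noisy MC scatter projections; the smoothing widths, detector size, and interpolation scheme are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import interp1d

# Hypothetical noisy MC scatter estimates: every 4th gantry angle simulated
# with few photons (180 of 720 projections, 64x64 detector bins).
rng = np.random.default_rng(0)
angles_sim = np.arange(0, 720, 4)
truth = 100 + 50 * np.sin(np.linspace(0, 2 * np.pi, angles_sim.size))[:, None, None]
scatter_mc = rng.poisson(truth * np.ones((angles_sim.size, 64, 64))).astype(float)

# 1) Kernel smoothing in the projection plane and across gantry angles
#    removes residual MC noise (sigma values are illustrative).
scatter_smooth = gaussian_filter(scatter_mc, sigma=(2.0, 4.0, 4.0))

# 2) Interpolate across gantry angle to recover the skipped projections.
interp = interp1d(angles_sim, scatter_smooth, axis=0,
                  kind="cubic", fill_value="extrapolate")
scatter_full = interp(np.arange(720))

print(scatter_full.shape)        # (720, 64, 64)
print("per-projection std before/after smoothing:",
      scatter_mc.std(axis=(1, 2)).mean().round(1),
      scatter_smooth.std(axis=(1, 2)).mean().round(1))
```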
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2017-02-01
Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are widely used to evaluate the complexity of a time series across multiple time scales 't'. Both of these techniques, at certain time scales (sometimes at all time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the fact that these techniques suffer from the following limitations: (1) the threshold value 'r' is updated as a function of the long-term standard deviation and hence is unable to explore the short-term variability, as well as the substantial variability inherent in beat-to-beat fluctuations, of long-term HRV time series; (2) in RMSE, the entropy values assigned to different filtered scaled time series are the result of changes in variance, but do not completely reflect the real structural organization of the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique by introducing a new procedure to set the threshold value that takes into account the period-to-period variability inherent in a signal, and we evaluate it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to the age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, over the entire range of time scales. The results strongly support a reduction in the complexity of HRV time series in the female group, in old-aged subjects, in patients suffering from severe cardiovascular and non-cardiovascular diseases, and in their corresponding surrogate time series.
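For context, the standard MSE building blocks (coarse-graining plus sample entropy) can be sketched as follows; note that the tolerance here follows the conventional r = 0.15 x SD rule that the paper criticizes, not the proposed period-to-period I-RMSE rule, and the RR-interval series is simulated.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard MSE coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Plain SampEn: -ln(A/B) with templates of length m and m+1,
    tolerance r given in the same units as x."""
    x = np.asarray(x, dtype=float)

    def count_matches(length):
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

# Hypothetical RR-interval series; the conventional tolerance r = 0.15*SD of the
# full series is used here, not the period-to-period rule proposed in the paper.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(3000)
r = 0.15 * rr.std()
for scale in (1, 2, 5, 10):
    print(scale, sample_entropy(coarse_grain(rr, scale), m=2, r=r))
```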
Scarneo, Samantha E; Root, Hayley J; Martinez, Jessica C; Denegar, Craig; Casa, Douglas J; Mazerolle, Stephanie M; Dann, Catie L; Aerni, Giselle A; DiStefano, Lindsay J
2017-01-01
Neuromuscular training programs (NTPs) improve landing technique and decrease vertical ground-reaction forces (VGRFs), resulting in injury-risk reduction. NTPs in an aquatic environment may elicit the same improvements as land-based programs with reduced joint stress. To examine the effects of an aquatic NTP on landing technique as measured by the Landing Error Scoring System (LESS) and VGRFs, immediately and 4 mo after the intervention. Repeated measures, pool and laboratory. Fifteen healthy, recreationally active women (age 21 ± 2 y, mass 62.02 ± 8.18 kg, height 164.74 ± 5.97 cm) who demonstrated poor landing technique (LESS-Real Time > 4). All participants completed an aquatic NTP 3 times/wk for 6 wk. Participants' landing technique was evaluated using a jump-landing task immediately before (PRE), immediately after (POST), and 4 mo after (RET) the intervention period. A single rater, blinded to time point, graded all videos using the LESS, which is a valid and reliable movement-screening tool. Peak VGRFs were measured during the stance phase of the jump-landing test. Repeated-measure analyses of variance with planned comparisons were performed to explore differences between time points. LESS scores were lower at POST (4.46 ± 1.69 errors) and at RET (4.2 ± 1.72 errors) than at PRE (6.30 ± 1.78 errors) (P < .01). No significant differences were observed between POST and RET (P > .05). Participants also landed with significantly lower peak VGRFs (P < .01) from PRE (2.69 ± .72 N) to POST (2.23 ± .66 N). The findings introduce evidence that an aquatic NTP improves landing technique and suggest that improvements are retained over time. These results show promise of using an aquatic NTP when there is a desire to reduce joint loading, such as early stages of rehabilitation, to improve biomechanics and reduce injury risk.
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a consider mode, in which their estimates are not improved but their associated uncertainties are permitted to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
Prakash, Priyanka; Kalra, Mannudeep K; Digumarthy, Subba R; Hsieh, Jiang; Pien, Homer; Singh, Sarabjeet; Gilman, Matthew D; Shepard, Jo-Anne O
2010-01-01
To assess radiation dose reduction and image quality for weight-based chest computed tomographic (CT) examinations reconstructed using the adaptive statistical iterative reconstruction (ASIR) technique. With local ethical committee approval, weight-adjusted chest CT examinations were performed using ASIR in 98 patients and filtered backprojection (FBP) in 54 weight-matched patients on a 64-slice multidetector CT. Patients were categorized into 3 groups: 60 kg or less (n = 32), 61 to 90 kg (n = 77), and 91 kg or more (n = 43) for weight-based adjustment of noise indices for automatic exposure control (Auto mA; GE Healthcare, Waukesha, Wis). Remaining scan parameters were held constant at 0.984:1 pitch, 120 kilovolts (peak), 40-mm table feed per rotation, and 2.5-mm section thickness. Patients' weight, scanning parameters, and CT dose index volume were recorded. Effective doses (EDs) were estimated. Image noise was measured in the descending thoracic aorta at the level of the carina. Data were analyzed using analysis of variance. Compared with FBP, ASIR was associated with an overall mean (SD) decrease of 27.6% in ED (ASIR, 8.8 [2.3] mSv; FBP, 12.2 [2.1] mSv; P < 0.0001). With the use of ASIR, the ED values were 6.5 (1.8) mSv (28.8% decrease), 7.3 (1.6) mSv (27.3% decrease), and 12.8 (2.3) mSv (26.8% decrease) for the weight groups of 60 kg or less, 61 to 90 kg, and 91 kg or more, respectively, compared with 9.2 (2.3) mSv, 10.0 (2.0) mSv, and 17.4 (2.1) mSv with FBP (P < 0.0001). Despite the dose reduction, image noise was lower with ASIR (12.6 [2.9] HU) than with FBP (16.6 [6.2] HU; P < 0.0001). Adaptive statistical iterative reconstruction helps reduce chest CT radiation dose and improve image quality compared with the conventionally used FBP image reconstruction.
NASA Astrophysics Data System (ADS)
Razani, Marjan; Zam, Azhar; Arezza, Nico J. J.; Wang, Yan J.; Kolios, Michael C.
2016-03-01
In this study, we present a technique to image the enhanced particle displacement generated using an acoustic radiation force (ARF) excitation source. A swept-source OCT (SS-OCT) system with a center wavelength of 1310 nm, a bandwidth of ~100 nm, and an A-scan rate of 100 kHz (MEMS-VCSEL OCT, Thorlabs) was used to detect gold nanoparticle (70 nm in diameter) displacement. ARF was applied after the nanoparticles passed through a porous membrane and diffused into a collagen (6% collagen) matrix. B-mode, M-B mode, 3D and Speckle Variance (SV) images were acquired before and after the ARF beam was turned on. Differential OCT speckle variance images with and without the ARF were used to measure the particle displacement. The images were used to detect the microscopic enhancement of nanoparticle displacement generated by the ARF. Using this OCT imaging technique, the extravasation of particles through a porous membrane and characterization of the enhanced particle displacement in a collagen gel after using an ARF excitation was achieved.
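Speckle variance OCT reduces, computationally, to the per-pixel intensity variance across repeated B-scans of the same location. The sketch below uses a synthetic frame stack (frame count, image size, and the decorrelating region are assumptions) to show the contrast this produces.

```python
import numpy as np

def speckle_variance(frames):
    """Speckle variance image from N structural OCT B-scans acquired at the
    same location: the per-pixel intensity variance across frames."""
    frames = np.asarray(frames, dtype=float)   # shape (N, rows, cols)
    return frames.var(axis=0)

# Hypothetical stack: static background plus a small region whose speckle
# decorrelates between frames (e.g. displaced nanoparticles).
rng = np.random.default_rng(0)
static = rng.rayleigh(1.0, (256, 256))                        # frozen speckle pattern
stack = np.repeat(static[None], 8, axis=0).copy()
stack[:, 100:140, 100:140] = rng.rayleigh(1.0, (8, 40, 40))   # decorrelating ROI

sv = speckle_variance(stack)
print("mean SV inside ROI :", sv[100:140, 100:140].mean().round(3))
print("mean SV outside ROI:", sv[:80, :80].mean().round(3))
```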
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Also, noise types such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed with known system transfer functions and known power spectral densities from the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-off and trouble-shooting.
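The frequency-domain step described above rests on the standard relation between the Allan variance and a one-sided PSD of fractional frequency fluctuations, sigma_y^2(tau) = 2 * integral of S_y(f) * sin^4(pi f tau) / (pi f tau)^2 df. A small numerical sketch (the cutoff frequency and noise level are arbitrary) checks it against the known white-frequency-noise result.

```python
import numpy as np

def allan_variance_from_psd(S_y, tau, f_high=10.0, n_points=200000):
    """Numerically evaluate sigma_y^2(tau) = 2 * integral_0^fh S_y(f) *
    sin(pi f tau)^4 / (pi f tau)^2 df for a one-sided PSD of fractional
    frequency fluctuations S_y(f)."""
    f = np.linspace(1e-6, f_high, n_points)
    kernel = 2.0 * np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
    return np.sum(S_y(f) * kernel) * (f[1] - f[0])

# White frequency noise S_y(f) = h0: the known result is sigma_y^2 = h0 / (2 tau),
# which provides a quick sanity check of the numerical integration.
h0 = 1e-22
for tau in (1.0, 10.0, 100.0):
    num = allan_variance_from_psd(lambda f: h0 * np.ones_like(f), tau)
    print(tau, num, h0 / (2 * tau))
```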
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
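The following sketch is not the MVR algorithm itself, but a toy version of the underlying idea of shrinking per-variable variance estimates toward a pooled value before forming t-like statistics; the shrinkage weight, data sizes, and the simulated mean-variance link are all assumptions.

```python
import numpy as np

def shrunken_variance_t(x, y, lam=0.5):
    """Illustrative regularized t-like statistic: gene-wise variances are
    shrunk toward the pooled (across-gene) variance before forming t.
    The weight `lam` is fixed here, not estimated as in the MVR package."""
    nx, ny = x.shape[1], y.shape[1]
    sx2 = x.var(axis=1, ddof=1)
    sy2 = y.var(axis=1, ddof=1)
    s2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)   # per-gene pooled
    s2_shrunk = lam * s2.mean() + (1 - lam) * s2             # shrink across genes
    se = np.sqrt(s2_shrunk * (1 / nx + 1 / ny))
    return (x.mean(axis=1) - y.mean(axis=1)) / se

# Hypothetical expression matrix: 2000 genes, 4 replicates per condition,
# with the first 50 genes truly differentially expressed.
rng = np.random.default_rng(0)
g, n = 2000, 4
base = rng.gamma(2.0, 1.0, g)[:, None]            # gene-specific scale (mean-variance link)
x = rng.normal(0.0, np.sqrt(base), (g, n))
y = rng.normal(0.0, np.sqrt(base), (g, n))
y[:50] += 2.0                                     # true shifts

t = shrunken_variance_t(x, y)
print("median |t|, true positives:", np.median(np.abs(t[:50])).round(2))
print("median |t|, null genes    :", np.median(np.abs(t[50:])).round(2))
```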
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data
Dazard, Jean-Eudes; Rao, J. Sunil
2012-01-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
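As a reference point for the single-reader ingredient mentioned above, the nonparametric AUC and its U-statistic (DeLong-type) variance can be computed as follows; this is the familiar fixed-reader building block, not the one-shot MRMC estimator, and the score distributions are simulated.

```python
import numpy as np

def auc_and_delong_variance(pos, neg):
    """Nonparametric (Mann-Whitney) AUC for one reader/modality with the
    DeLong U-statistic variance, shown here as the single-reader building
    block rather than the one-shot MRMC estimator."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    # Pairwise success kernel: 1 if pos > neg, 0.5 if tied.
    psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
    auc = psi.mean()
    v10 = psi.mean(axis=1)          # structural components over diseased cases
    v01 = psi.mean(axis=0)          # structural components over normal cases
    var = v10.var(ddof=1) / len(pos) + v01.var(ddof=1) / len(neg)
    return auc, var

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 60)      # hypothetical non-diseased scores
pos = rng.normal(1.0, 1.0, 40)      # hypothetical diseased scores
auc, var = auc_and_delong_variance(pos, neg)
print("AUC =", round(auc, 3), " SE =", round(float(np.sqrt(var)), 3))
```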
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
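A generic generalised cross-correlation delay estimator is sketched below, with a PHAT-style prefilter regularised by a small constant standing in for the paper's modified ML prefilter; the sampling rate, delay, and noise level are hypothetical.

```python
import numpy as np

def gcc_delay(x1, x2, fs, eps=1e-3):
    """Generalised cross-correlation delay of x2 relative to x1, using a
    PHAT-style prefilter regularised by `eps` (a generic stand-in for the
    modified ML prefilter of the paper, which also uses estimated PSDs/CSD)."""
    N = len(x1)
    nfft = 2 * N
    X1, X2 = np.fft.rfft(x1, nfft), np.fft.rfft(x2, nfft)
    S = X2 * np.conj(X1)                             # cross-spectrum estimate
    cc = np.fft.irfft(S / (np.abs(S) + eps), nfft)
    cc = np.concatenate((cc[-(N - 1):], cc[:N]))     # lags -(N-1) .. (N-1)
    return (np.argmax(cc) - (N - 1)) / fs

# Hypothetical leak signal reaching sensor 2 twelve samples after sensor 1.
rng = np.random.default_rng(0)
fs = 4096.0
s = rng.standard_normal(8192)
x1 = s + 0.2 * rng.standard_normal(8192)
x2 = np.roll(s, 12) + 0.2 * rng.standard_normal(8192)

print("estimated delay (s):", gcc_delay(x1, x2, fs), " true delay (s):", 12 / fs)
```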
A partially reflecting random walk on spheres algorithm for electrical impedance tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de
2015-12-15
In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
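The plain walk-on-spheres algorithm that the estimator extends can be sketched compactly; the partially reflecting boundary treatment and the interface replacement techniques of the paper are not reproduced here, and the disk domain and Dirichlet data are chosen only so that the exact answer is known.

```python
import numpy as np

def walk_on_spheres(x0, boundary_value, radius=1.0, eps=1e-4, n_walks=5000,
                    rng=None):
    """Plain walk-on-spheres estimate of the harmonic function u at x0 inside
    a disk of the given radius, with Dirichlet data `boundary_value`."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):
        p = np.array(x0, dtype=float)
        while True:
            d = radius - np.linalg.norm(p)           # distance to the boundary
            if d < eps:                              # close enough: read boundary data
                total += boundary_value(p / np.linalg.norm(p) * radius)
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)    # jump to the largest inscribed sphere
            p = p + d * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

# Dirichlet data g(x, y) = x; the harmonic extension in the unit disk is u = x,
# so the estimate at (0.3, 0.4) should be close to 0.3.
print(walk_on_spheres((0.3, 0.4), boundary_value=lambda q: q[0]))
```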
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter
Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-01-01
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
The utility of the cropland data layer for Forest Inventory and Analysis
Greg C. Liknes; Mark D. Nelson; Dale D. Gormanson; Mark Hansen
2009-01-01
The Forest Service, U.S. Department of Agriculture's (USDA's) Northern Research Station Forest Inventory and Analysis program (NRS-FIA) uses digital land cover products derived from remotely sensed imagery, such as the National Land Cover Dataset (NLCD), for the purpose of variance reduction via postsampling stratification. The update cycle of the NLCD...
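A minimal sketch of post-sampling stratification of the kind referred to above, assuming hypothetical plot values and map-derived stratum weights: the post-stratified estimate of the mean and its approximate variance are compared with the simple-random-sampling standard error.

```python
import numpy as np

def post_stratified_mean(y, strata, stratum_weights):
    """Post-stratified estimator of the population mean and its variance,
    the standard way a land-cover map is used for variance reduction in
    forest inventory (notation and weights here are illustrative)."""
    n = len(y)
    est, var = 0.0, 0.0
    for h, W in stratum_weights.items():
        yh = y[strata == h]
        est += W * yh.mean()
        # Common post-stratification approximation to the variance.
        var += (W * yh.var(ddof=1) / n) + ((1 - W) * yh.var(ddof=1) / n ** 2)
    return est, var

# Hypothetical plot data: forest / nonforest strata from a land-cover map.
rng = np.random.default_rng(0)
strata = rng.choice(["forest", "nonforest"], size=400, p=[0.4, 0.6])
y = np.where(strata == "forest", rng.normal(120, 30, 400), rng.normal(5, 4, 400))
weights = {"forest": 0.4, "nonforest": 0.6}   # map-derived area proportions

est, var = post_stratified_mean(y, strata, weights)
print("post-stratified mean:", round(est, 1), " SE:", round(float(np.sqrt(var)), 2))
print("simple random sample mean:", round(float(y.mean()), 1), " SE:",
      round(float(y.std(ddof=1) / np.sqrt(len(y))), 2))
```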
Southwestern USA Drought over Multiple Millennia
NASA Astrophysics Data System (ADS)
Salzer, M. W.; Kipfmueller, K. F.
2014-12-01
Severe to extreme drought conditions currently exist across much of the American West. There is increasing concern that climate change may be worsening droughts in the West and particularly the Southwest. Thus, it is important to understand the role of natural variability and to place current conditions in a long-term context. We present a tree-ring derived reconstruction of regional-scale precipitation for the Southwestern USA over several millennia. A network of 48 tree-ring chronologies from California, Nevada, Utah, Arizona, New Mexico, and Colorado was used. All of the chronologies are at least 1,000 years long. The network was subjected to data reduction through PCA and a "nested" multiple linear regression reconstruction approach. The regression model was able to capture 72% of the variance in September-August precipitation over the last 1,000 years and 53% of the variance over the first millennium of the Common Era. Variance captured and spatial coverage further declined back in time as the shorter chronologies dropped out of the model, eventually reaching 24% of variance captured at 3250 BC. Results show regional droughts on decadal- to multi-decadal scales have been prominent and persistent phenomena in the region over the last several millennia. Anthropogenic warming is likely to exacerbate the effects of future droughts on human and other biotic populations.
The magnitude and colour of noise in genetic negative feedback systems
Voliotis, Margaritis; Bowsher, Clive G.
2012-01-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
Direct simulation of compressible turbulence in a shear flow
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
The purpose of this study is to investigate compressibility effects on the turbulence in homogeneous shear flow. It is found that the growth of the turbulent kinetic energy decreases with increasing Mach number, a phenomenon similar to the reduction of turbulent velocity intensities observed in experiments on supersonic free shear layers. An examination of the turbulent energy budget shows that both the compressible dissipation and the pressure-dilatation contribute to the decrease in the growth of kinetic energy. The pressure-dilatation is predominantly negative in homogeneous shear flow, in contrast to its predominantly positive behavior in isotropic turbulence. The different signs of the pressure-dilatation are explained by theoretical consideration of the equations for the pressure variance and density variance.
The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction
NASA Astrophysics Data System (ADS)
Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.
2017-04-01
Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on the aforementioned two parameters. This yielded mathematical solutions that can be used by FeCr producers to select reductants more optimally in the future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors for predicting variance in pre-reduction and compressive strength, respectively. The role of H within this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since this would maximize the H content that enhances pre-reduction.
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
An improved technique for the 2H/1H analysis of urines from diabetic volunteers
Coplen, T.B.; Harper, I.T.
1994-01-01
The H2-H2O ambient-temperature equilibration technique for the determination of 2H/1H ratios in urinary waters from diabetic subjects provides improved accuracy over the conventional Zn reduction technique. The standard deviation, approximately 1-2‰, is at least a factor of three better than that of the Zn reduction technique on urinary waters from diabetic volunteers. Experiments with pure water and solutions containing glucose, urea and albumen indicate that there is no measurable bias in the hydrogen equilibration technique.
A multispecies tree ring reconstruction of Potomac River streamflow (950-2001)
NASA Astrophysics Data System (ADS)
Maxwell, R. Stockton; Hessl, Amy E.; Cook, Edward R.; Pederson, Neil
2011-05-01
Mean May-September Potomac River streamflow was reconstructed from 950-2001 using a network of tree ring chronologies (n = 27) representing multiple species. We chose a nested principal components reconstruction method to maximize use of available chronologies backward in time. Explained variance during the period of calibration ranged from 20% to 53% depending on the number and species of chronologies available in each 25 year time step. The model was verified by two goodness of fit tests, the coefficient of efficiency (CE) and the reduction of error statistic (RE). The RE and CE never fell below zero, suggesting the model had explanatory power over the entire period of reconstruction. Beta weights indicated a loss of explained variance during the 1550-1700 period that we hypothesize was caused by the reduction in total number of predictor chronologies and loss of important predictor species. Thus, the reconstruction is strongest from 1700-2001. Frequency, intensity, and duration of drought and pluvial events were examined to aid water resource managers. We found that the instrumental period did not represent adequately the full range of annual to multidecadal variability present in the reconstruction. Our reconstruction of mean May-September Potomac River streamflow was a significant improvement over the Cook and Jacoby (1983) reconstruction because it expanded the seasonal window, lengthened the record by 780 years, and better replicated the mean and variance of the instrumental record. By capitalizing on variable phenologies and tree growth responses to climate, multispecies reconstructions may provide significantly more information about past hydroclimate, especially in regions with low aridity and high tree species diversity.
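The two verification statistics named above have simple closed forms, RE = 1 - SSE / sum((obs - calibration-period mean)^2) and CE = 1 - SSE / sum((obs - verification-period mean)^2); a short sketch with made-up verification data follows.

```python
import numpy as np

def reduction_of_error(obs, pred, calib_mean):
    """RE: skill relative to the calibration-period mean used as a predictor."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - calib_mean) ** 2)

def coefficient_of_efficiency(obs, pred):
    """CE: skill relative to the verification-period mean used as a predictor."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical split-sample verification of a streamflow reconstruction.
rng = np.random.default_rng(0)
obs_verif = rng.gamma(4.0, 25.0, 50)                  # observed flows (verification years)
pred_verif = obs_verif + rng.normal(0.0, 20.0, 50)    # reconstruction with error
calib_mean = 100.0                                    # mean flow in the calibration period

print("RE =", round(reduction_of_error(obs_verif, pred_verif, calib_mean), 2))
print("CE =", round(coefficient_of_efficiency(obs_verif, pred_verif), 2))
```

Positive values of both statistics indicate that the reconstruction outperforms a climatological mean, which is the sense in which the abstract reports that RE and CE never fell below zero.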
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
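For the variance-based side, a first-order Sobol index can be estimated with the usual Saltelli-type sampling scheme; the sketch below uses a toy model with uniform inputs, not the SHEDS testbed.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, rng=None):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    S_i = V_i / V for independent inputs drawn here as U(0, 1)."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # resample only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy exposure-like model with an interaction: inputs 0 and 1 dominate.
def model(x):
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

print(first_order_sobol(model, n_inputs=3))
```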
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
Wright, George W; Simon, Richard M
2003-12-12
Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene by gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
Harkness, Mark; Fisher, Angela; Lee, Michael D; Mack, E Erin; Payne, Jo Ann; Dworatzek, Sandra; Roberts, Jeff; Acheson, Carolyn; Herrmann, Ronald; Possolo, Antonio
2012-04-01
A large, multi-laboratory microcosm study was performed to select amendments for supporting reductive dechlorination of high levels of trichloroethylene (TCE) found at an industrial site in the United Kingdom (UK) containing dense non-aqueous phase liquid (DNAPL) TCE. The study was designed as a fractional factorial experiment involving 177 bottles distributed between four industrial laboratories and was used to assess the impact of six electron donors, bioaugmentation, addition of supplemental nutrients, and two TCE levels (0.57 and 1.90 mM or 75 and 250 mg/L in the aqueous phase) on TCE dechlorination. Performance was assessed based on the concentration changes of TCE and reductive dechlorination degradation products. The chemical data was evaluated using analysis of variance (ANOVA) and survival analysis techniques to determine both main effects and important interactions for all the experimental variables during the 203-day study. The statistically based design and analysis provided powerful tools that aided decision-making for field application of this technology. The analysis showed that emulsified vegetable oil (EVO), lactate, and methanol were the most effective electron donors, promoting rapid and complete dechlorination of TCE to ethene. Bioaugmentation and nutrient addition also had a statistically significant positive impact on TCE dechlorination. In addition, the microbial community was measured using phospholipid fatty acid analysis (PLFA) for quantification of total biomass and characterization of the community structure and quantitative polymerase chain reaction (qPCR) for enumeration of Dehalococcoides organisms (Dhc) and the vinyl chloride reductase (vcrA) gene. The highest increase in levels of total biomass and Dhc was observed in the EVO microcosms, which correlated well with the dechlorination results. Copyright © 2012 Elsevier B.V. All rights reserved.
García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M
2010-07-01
In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
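The roulette and splitting moves that the ant colony algorithm drives can be illustrated generically: when a particle crosses between regions of an importance map, it is split or rouletted so that its expected weight is conserved. The importance values below are arbitrary, and the ant colony control itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_and_split(weight, imp_here, imp_next):
    """Generic importance-based splitting / Russian roulette, as used to keep
    particle weights near an importance map's targets (illustrative only).
    Returns the list of daughter weights; the expected total weight equals
    the incoming weight."""
    ratio = imp_next / imp_here
    if ratio >= 1.0:                      # entering a more important region: split
        n = int(ratio)
        if rng.random() < ratio - n:      # handle the fractional part stochastically
            n += 1
        return [weight / ratio] * n
    # Less important region: play Russian roulette with survival prob = ratio.
    if rng.random() < ratio:
        return [weight / ratio]
    return []

# A particle of unit weight crossing into a region 3.4x more important:
print(roulette_and_split(1.0, imp_here=1.0, imp_next=3.4))
# ...and into a region 5x less important (it survives about 20% of the time):
print(roulette_and_split(1.0, imp_here=1.0, imp_next=0.2))
```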
Final Design for a Comprehensive Orbital Debris Management Program
NASA Technical Reports Server (NTRS)
1990-01-01
The rationale and specifics for the design of a comprehensive program for the control of orbital debris, as well as details of the various components of the overall plan, are described. The problem of orbital debris has been steadily worsening since the first successful launch in 1957. The hazards posed by orbital debris suggest the need for a progressive plan for the prevention of future debris, as well as the reduction of the current debris level. The proposed debris management plan includes debris removal systems and preventative techniques and policies. The debris removal is directed at improving the current debris environment. Because of the variance in sizes of debris, a single system cannot reasonably remove all kinds of debris. An active removal system, which deliberately retrieves targeted debris from known orbits, was determined to be effective in the disposal of debris tracked directly from earth. However, no effective system is currently available to remove the untrackable debris. The debris program is intended to protect the orbital environment from future abuses. This portion of the plan involves various methods and rules for future prevention of debris. The preventative techniques are protective methods that can be used in future design of payloads. The prevention policies are rules which should be employed to force the prevention of orbital debris.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
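The core of LWPR is a bank of locally weighted linear models. The minimal sketch below (Python, with assumed Gaussian receptive fields and a ridge-regularized local fit) illustrates only that core; the incremental partial-least-squares updates, online kernel adaptation, and leave-one-out machinery of the actual algorithm are omitted.

```python
import numpy as np

def lwr_predict(x_query, X, y, centers, bandwidth=1.0):
    """Locally weighted linear prediction: fit one ridge-regularized linear
    model per receptive field (Gaussian kernel around each center) and blend
    the local predictions by their kernel activations at the query point."""
    preds, acts = [], []
    for c in centers:
        w = np.exp(-0.5 * np.sum((X - c) ** 2, axis=1) / bandwidth ** 2)
        Xb = np.hstack([X - c, np.ones((len(X), 1))])       # local linear model
        W = np.diag(w)
        beta = np.linalg.solve(Xb.T @ W @ Xb + 1e-6 * np.eye(Xb.shape[1]),
                               Xb.T @ W @ y)
        xq = np.append(x_query - c, 1.0)
        preds.append(xq @ beta)
        acts.append(np.exp(-0.5 * np.sum((x_query - c) ** 2) / bandwidth ** 2))
    acts = np.array(acts)
    return float(np.dot(acts, preds) / acts.sum())

# Toy usage: approximate y = sin(x) from noisy samples
rng = np.random.default_rng(1)
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)
centers = np.linspace(0, 6, 10).reshape(-1, 1)
print(lwr_predict(np.array([2.5]), X, y, centers, bandwidth=0.5))
```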
Portfolio Decisions and Brain Reactions via the CEAD method.
Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K
2016-09-01
Decision making can be a complex process requiring the integration of several attributes of choice options. Understanding the neural processes underlying (uncertain) investment decisions is an important topic in neuroeconomics. We analyzed functional magnetic resonance imaging (fMRI) data from an investment decision study for stimulus-related effects. We propose a new technique for identifying activated brain regions: cluster, estimation, activation, and decision method. Our analysis is focused on clusters of voxels rather than voxel units. Thus, we achieve a higher signal-to-noise ratio within the unit tested and a smaller number of hypothesis tests compared with the often used General Linear Model (GLM). We propose to first conduct the brain parcellation by applying spatially constrained spectral clustering. The information within each cluster can then be extracted by the flexible dynamic semiparametric factor model (DSFM) dimension reduction technique and finally be tested for differences in activation between conditions. This sequence of Cluster, Estimation, Activation, and Decision admits a model-free analysis of the local fMRI signal. Applying a GLM on the DSFM-based time series resulted in a significant correlation between the risk of choice options and changes in fMRI signal in the anterior insula and dorsomedial prefrontal cortex. Additionally, individual differences in decision-related reactions within the DSFM time series predicted individual differences in risk attitudes as modeled with the framework of the mean-variance model.
Chounchaisithi, Napa; Santiwong, Busayarat; Sutthavong, Sirikarn; Asvanit, Pompun
2014-02-01
Disclosing agents have a long history of use as an aid in children's tooth brushing instruction. However, their benefit when used to improve self-performed tooth brushing ability without any tooth brushing instruction has not been investigated. To evaluate the effect of disclosed plaque visualization on improving the self-performed, tooth brushing ability of primary school children. A cluster-randomized, crossover study was conducted in Nakhon Nayok province, Thailand. A total of 122 second-grade schoolchildren, aged 8-10 years old, from 12 schools were randomly divided into 2 groups. The first group was assigned to brush with disclosed plaque visualization, while the other group brushed without disclosed plaque visualization. One month later the groups switched procedures. Tooth brushing ability was evaluated by the subjects' reduction in patient hygiene performance (PHP) scores. The data were analyzed using repeated-measures analysis of variance, with significance set at p<0.05. Disclosed plaque visualization had a significant effect on improving the children's self-performed, tooth brushing ability in all areas of the mouth (p<0.001), particularly for anterior teeth, mandibular teeth, buccal surfaces, and areas adjacent to the gingival margin (p<0.001). Disclosed plaque visualization is a viable technique to improve children's self-performed tooth brushing ability, and could be used in school-based oral health promotion programs.
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
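A present-day analogue of the composite idea is sketched below with scikit-learn: a cheap initial partition supplies seed centroids that a generalized K-means then refines iteratively. The seed-selection rule and the synthetic data are assumptions; this is not the original 1972 implementation or its sequential variance analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

def composite_clustering(X, n_clusters, n_initial=50, random_state=0):
    """Two-stage clustering in the spirit of the composite technique:
    (1) a quick over-segmentation provides initial cluster centroids,
    (2) K-means refines them iteratively into the final partition."""
    # Stage 1: coarse pre-clustering (a stand-in for the sequential
    # variance analysis of the original method)
    pre = KMeans(n_clusters=n_initial, n_init=1, max_iter=20,
                 random_state=random_state).fit(X)
    # Use a subset of the pre-cluster centroids as seeds
    seeds = pre.cluster_centers_[:n_clusters]
    # Stage 2: generalized K-means refinement started from those seeds
    final = KMeans(n_clusters=n_clusters, init=seeds, n_init=1,
                   random_state=random_state).fit(X)
    return final.labels_, final.cluster_centers_

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.3, size=(100, 4)) for m in (0.0, 2.0, 4.0)])
labels, centers = composite_clustering(X, n_clusters=3)
print(np.bincount(labels))
```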
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
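Principal components regression, one of the two biased estimators discussed, can be sketched generically as follows (a scikit-learn pipeline on synthetic collinear data; the flight-test data and the mixed estimator of the report are not reproduced).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Collinear regressors: x2 is nearly a copy of x1
rng = np.random.default_rng(3)
x1 = rng.standard_normal(200)
x2 = x1 + 0.01 * rng.standard_normal(200)
x3 = rng.standard_normal(200)
X = np.column_stack([x1, x2, x3])
y = 1.0 * x1 + 0.5 * x3 + 0.1 * rng.standard_normal(200)

# Principal components regression: regress on the leading components only,
# discarding the near-null direction that carries the collinearity.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("PCR R^2:", pcr.score(X, y))
```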
ERIC Educational Resources Information Center
Games, Paul A.
1975-01-01
A brief introduction is presented on how multiple regression and linear model techniques can handle data analysis situations that most educators and psychologists think of as appropriate for analysis of variance. (Author/BJG)
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
[Exploration of influencing factors of price of herbal based on VAR model].
Wang, Nuo; Liu, Shu-Zhen; Yang, Guang
2014-10-01
Based on a vector auto-regression (VAR) model, this paper uses Granger causality tests, variance decomposition, and impulse response analysis to carry out a comprehensive study of the factors influencing the price of Chinese herbal medicine, including cultivation costs, acreage, natural disasters, residents' demand, and inflation. The study found Granger causality relationships between inflation and herbal prices and between cultivation costs and herbal prices. In the variance decomposition of the Chinese herbal medicine price index, the largest contribution comes from its own fluctuations, followed by cultivation costs and inflation.
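The same toolkit (Granger causality tests, impulse responses, and forecast-error variance decomposition) is available in statsmodels; the sketch below runs it on simulated monthly series standing in for price, cultivation cost, and inflation, since the paper's data are not available here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated series: price responds to lagged cost and lagged inflation proxy
rng = np.random.default_rng(4)
n = 120
cost = np.cumsum(rng.normal(0.1, 1.0, n))
cpi = np.cumsum(rng.normal(0.05, 0.5, n))
price = np.zeros(n)
for t in range(1, n):
    price[t] = 0.6 * cost[t - 1] + 0.3 * cpi[t - 1] + rng.normal(0.0, 1.0)
data = pd.DataFrame({"price": price, "cost": cost, "cpi": cpi}).iloc[1:]

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")                 # lag order chosen by AIC
print(res.test_causality("price", ["cost"], kind="f").summary())  # Granger test
print(res.fevd(10).summary())                        # variance decomposition
irf = res.irf(10)                                    # impulse responses (irf.plot())
```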
Levecke, Bruno; Anderson, Roy M; Berkvens, Dirk; Charlier, Johannes; Devleesschauwer, Brecht; Speybroeck, Niko; Vercruysse, Jozef; Van Aelst, Stefan
2015-03-01
In the present study, we present a hierarchical model based on faecal egg counts (FECs; expressed in eggs per 1 g of stool) in which we first describe the variation in FECs between individuals in a particular population, and then describe the variance due to counting eggs under a microscope separately for each stool sample. From this general framework, we discuss how to calculate a sample size for assessing a population mean FEC and the impact of an intervention, measured as reduction in FECs, for any scenario of soil-transmitted helminth (STH) epidemiology (the intensity and aggregation of FECs within a population) and diagnostic strategy (amount of stool examined (∼sensitivity of the diagnostic technique) and examination of individual/pooled stool samples), and how to estimate the prevalence of STH in the absence of a gold standard. To give these applications the widest possible relevance, we illustrate each of them with hypothetical examples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H
2005-01-01
Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of configurations of shielding modification. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. The CADIS utilises the source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
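A schematic of the CADIS idea itself, independent of the TORT/MCNP coupling used in the study: an adjoint (importance) solution fixes a biased source and consistent weight-window centers so that particle weight times importance stays roughly constant. The cell importances and source below are made-up numbers for illustration.

```python
import numpy as np

def cadis_weight_windows(adjoint_flux, source):
    """Consistent source biasing and weight-window centers from a
    deterministic adjoint (importance) solution, in the spirit of CADIS.

    adjoint_flux : adjoint flux (importance) per space-energy cell
    source       : true (unbiased) source strength per cell
    """
    # Estimated detector response R = sum_i q_i * phi_dagger_i
    R = np.sum(source * adjoint_flux)
    # Biased source: sample cells in proportion to their contribution to R
    biased_source = source * adjoint_flux / R
    # Consistent starting weights / weight-window centers: w_i = R / phi_dagger_i,
    # so that weight * importance is constant for source particles
    ww_centers = R / adjoint_flux
    return biased_source, ww_centers

adjoint = np.array([0.1, 0.5, 2.0, 8.0])   # importance grows toward the detector
source = np.array([1.0, 1.0, 0.0, 0.0])    # physical source in the first cells
q, w = cadis_weight_windows(adjoint, source)
print("biased source pdf:", q, "weight-window centers:", w)
```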
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. However, so far, epistemic location uncertainty has received relatively little research attention. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
Frederick, Blaise deB; Nickerson, Lisa D; Tong, Yunjie
2012-04-15
Confounding noise in BOLD fMRI data arises primarily from fluctuations in blood flow and oxygenation due to cardiac and respiratory effects, spontaneous low frequency oscillations (LFO) in arterial pressure, and non-task related neural activity. Cardiac noise is particularly problematic, as the low sampling frequency of BOLD fMRI ensures that these effects are aliased in recorded data. Various methods have been proposed to estimate the noise signal through measurement and transformation of the cardiac and respiratory waveforms (e.g. RETROICOR and respiration volume per time (RVT)) and model-free estimation of noise variance through examination of spatial and temporal patterns. We have previously demonstrated that by applying a voxel-specific time delay to concurrently acquired near infrared spectroscopy (NIRS) data, we can generate regressors that reflect systemic blood flow and oxygenation fluctuations effects. Here, we apply this method to the task of removing physiological noise from BOLD data. We compare the efficacy of noise removal using various sets of noise regressors generated from NIRS data, and also compare the noise removal to RETROICOR+RVT. We compare the results of resting state analyses using the original and noise filtered data, and we evaluate the bias for the different noise filtration methods by computing null distributions from the resting data and comparing them with the expected theoretical distributions. Using the best set of processing choices, six NIRS-generated regressors with voxel-specific time delays explain a median of 10.5% of the variance throughout the brain, with the highest reductions being seen in gray matter. By comparison, the nine RETROICOR+RVT regressors together explain a median of 6.8% of the variance in the BOLD data. Detection of resting state networks was enhanced with NIRS denoising, and there were no appreciable differences in the bias of the different techniques. Physiological noise regressors generated using Regressor Interpolation at Progressive Time Delays (RIPTiDe) offer an effective method for efficiently removing hemodynamic noise from BOLD data. Copyright © 2012 Elsevier Inc. All rights reserved.
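A toy version of the voxel-specific time-delay idea (not the RIPTiDe implementation): shift a single systemic reference signal over a range of lags, keep the best-fitting lag per voxel, and report the fraction of variance it explains. The lag range and simulated data are assumptions.

```python
import numpy as np

def lagged_variance_explained(bold, reference, max_lag=10):
    """For each voxel time series in `bold` (n_voxels x n_time), find the lag
    of `reference` that explains the most variance and return that fraction.
    In the application described, the reference would be a NIRS-derived
    systemic signal; here it is simulated."""
    n_vox, n_t = bold.shape
    best_r2 = np.zeros(n_vox)
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(reference, lag)          # circular shift for simplicity
        x = shifted - shifted.mean()
        denom = np.dot(x, x)
        for v in range(n_vox):
            y = bold[v] - bold[v].mean()
            beta = np.dot(x, y) / denom            # single-regressor least squares
            resid = y - beta * x
            r2 = 1.0 - resid.var() / y.var()
            best_r2[v] = max(best_r2[v], r2)
    return best_r2

rng = np.random.default_rng(5)
ref = np.sin(np.linspace(0, 20, 300))
bold = 0.5 * np.roll(ref, 3) + rng.normal(0, 0.5, size=(50, 300))
print("median variance explained:", np.median(lagged_variance_explained(bold, ref)))
```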
Visentin, G; Penasa, M; Gottardo, P; Cassandro, M; De Marchi, M
2016-10-01
Milk minerals and coagulation properties are important for both consumers and processors, and they can aid in increasing milk added value. However, large-scale monitoring of these traits is hampered by expensive and time-consuming reference analyses. The objective of the present study was to develop prediction models for major mineral contents (Ca, K, Mg, Na, and P) and milk coagulation properties (MCP: rennet coagulation time, curd-firming time, and curd firmness) using mid-infrared spectroscopy. Individual milk samples (n=923) of Holstein-Friesian, Brown Swiss, Alpine Grey, and Simmental cows were collected from single-breed herds between January and December 2014. Reference analysis for the determination of both mineral contents and MCP was undertaken with standardized methods. For each milk sample, the mid-infrared spectrum in the range from 900 to 5,000cm(-1) was stored. Prediction models were calibrated using partial least squares regression coupled with a wavenumber selection technique called uninformative variable elimination, to improve model accuracy, and validated both internally and externally. The average reduction of wavenumbers used in partial least squares regression was 80%, which was accompanied by an average increment of 20% of the explained variance in external validation. The proportion of explained variance in external validation was about 70% for P, K, Ca, and Mg, and it was lower (40%) for Na. Milk coagulation properties prediction models explained between 54% (rennet coagulation time) and 56% (curd-firming time) of the total variance in external validation. The ratio of standard deviation of each trait to the respective root mean square error of prediction, which is an indicator of the predictive ability of an equation, suggested that the developed models might be effective for screening and collection of milk minerals and coagulation properties at the population level. Although prediction equations were not accurate enough to be proposed for analytic purposes, mid-infrared spectroscopy predictions could be evaluated as phenotypic information to genetically improve milk minerals and MCP on a large scale. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
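A generic sketch of partial least squares calibration with a crude wavenumber-selection step on synthetic spectra; the selection rule below is only a stand-in for uninformative variable elimination, and the spectral data and traits of the study are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_samples, n_wavenumbers = 200, 300
X = rng.standard_normal((n_samples, n_wavenumbers))
# Only the first 30 "wavenumbers" carry information about the trait (e.g. Ca)
y = X[:, :30] @ rng.normal(1.0, 0.2, 30) + rng.normal(0, 1.0, n_samples)

# Full-spectrum PLS calibration
r2_full = cross_val_score(PLSRegression(n_components=10), X, y,
                          cv=5, scoring="r2").mean()

# Crude variable selection: keep wavenumbers whose |regression coefficient|
# exceeds the median magnitude (a stand-in for the UVE reliability criterion)
coef = np.abs(PLSRegression(n_components=10).fit(X, y).coef_).ravel()
keep = coef > np.median(coef)
r2_sel = cross_val_score(PLSRegression(n_components=10), X[:, keep], y,
                         cv=5, scoring="r2").mean()
print(f"R2 full spectrum: {r2_full:.2f}, after selection ({keep.sum()} vars): {r2_sel:.2f}")
```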
Robust Derivation of Risk Reduction Strategies
NASA Technical Reports Server (NTRS)
Richardson, Julian; Port, Daniel; Feather, Martin
2007-01-01
Effective risk reduction strategies can be derived mechanically given sufficient characterization of the risks present in the system and the effectiveness of available risk reduction techniques. In this paper, we address an important question: can we reliably expect mechanically derived risk reduction strategies to be better than fixed or hand-selected risk reduction strategies, given that the quantitative assessment of risks and risk reduction techniques upon which mechanical derivation is based is difficult and likely to be inaccurate? We consider this question relative to two methods for deriving effective risk reduction strategies: the strategic method defined by Kazman, Port et al [Port et al, 2005], and the Defect Detection and Prevention (DDP) tool [Feather & Cornford, 2003]. We performed a number of sensitivity experiments to evaluate how inaccurate knowledge of risk and risk reduction techniques affect the performance of the strategies computed by the Strategic Method compared to a variety of alternative strategies. The experimental results indicate that strategies computed by the Strategic Method were significantly more effective than the alternative risk reduction strategies, even when knowledge of risk and risk reduction techniques was very inaccurate. The robustness of the Strategic Method suggests that its use should be considered in a wide range of projects.
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
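Static condensation, one of the two reduction techniques considered, can be sketched for a generic stiffness/mass pair as follows; the generalized dynamic reduction and the eigen-derivative iterations of the paper are not shown.

```python
import numpy as np
from scipy.linalg import eigh

def guyan_condensation(K, M, master):
    """Static (Guyan) condensation of stiffness K and mass M onto the
    `master` degrees of freedom. Slave DOFs are eliminated through the
    static transformation T = [I; -Kss^{-1} Ksm]."""
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T, T

# 4-DOF spring-mass chain, keeping DOFs 0 and 3 as masters
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)
Kr, Mr, T = guyan_condensation(K, M, master=[0, 3])
# Approximate squared natural frequencies of the reduced model
print(eigh(Kr, Mr, eigvals_only=True))
```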
Ozone data and mission sampling analysis
NASA Technical Reports Server (NTRS)
Robbins, J. L.
1980-01-01
A methodology was developed to analyze discrete data obtained from the global distribution of ozone. Statistical analysis techniques were applied to describe the distribution of data variance in terms of empirical orthogonal functions and components of spherical harmonic models. The effects of uneven data distribution and missing data were considered. Data fill based on the autocorrelation structure of the data is described. Computer coding of the analysis techniques is included.
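A generic empirical orthogonal function decomposition via the SVD is sketched below on synthetic gridded data; the spherical-harmonic modelling, autocorrelation-based data fill, and the ozone data themselves are not reproduced.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """Empirical orthogonal functions of a (time x space) data matrix.
    Returns spatial patterns, principal-component time series, and the
    fraction of variance captured by each mode."""
    anomalies = field - field.mean(axis=0)           # remove time mean per grid point
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    eofs = Vt[:n_modes]                              # spatial patterns
    pcs = U[:, :n_modes] * s[:n_modes]               # time expansion coefficients
    return eofs, pcs, variance_fraction[:n_modes]

rng = np.random.default_rng(7)
t = np.linspace(0, 4 * np.pi, 120)[:, None]
space = np.linspace(0, 1, 60)[None, :]
field = np.sin(t) * space + 0.1 * rng.standard_normal((120, 60))
eofs, pcs, frac = eof_analysis(field)
print("variance fraction of leading modes:", np.round(frac, 3))
```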
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may be also considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance was also used in other fields than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finances. However, it seems that up to now, it has been exclusively applied for time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior for different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that radial Allan variance is the more appropriate way to have an estimator insensitive to the spatial axis and we applied it on SAR data acquired over eastern Turkey for the period 2003-2011. Spatial Allan variance allowed us to well characterize noise features, classically found in InSAR such as phase decorrelation producing white noise or atmospheric delays, behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
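A minimal non-overlapping Allan variance for equally spaced samples, applicable to a time series or to a line of spatial samples; the XY and radial spatial variants of the paper generalize this same estimator. The noise levels below are illustrative.

```python
import numpy as np

def allan_variance(y, taus):
    """Non-overlapping Allan variance of equally spaced samples `y` for the
    averaging windows in `taus` (window lengths in samples):
    sigma^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    y = np.asarray(y, dtype=float)
    out = []
    for m in taus:
        n_blocks = len(y) // m
        means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

rng = np.random.default_rng(8)
white = rng.standard_normal(4096)                     # white noise: AVAR ~ 1/tau
walk = 0.01 * np.cumsum(rng.standard_normal(4096))    # random walk: AVAR ~ tau
taus = [1, 2, 4, 8, 16, 32, 64]
print("white noise:", np.round(allan_variance(white, taus), 4))
print("random walk:", np.round(allan_variance(walk, taus), 4))
```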
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization setting. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
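A small pick-freeze Monte Carlo estimator of first-order Sobol' indices on a toy function with uniform inputs; the HSAMI model and the exact estimator used in the paper are not reproduced. Parameters with consistently small indices are the natural candidates for fixing.

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=20000, seed=0):
    """Pick-freeze (Saltelli-style) estimates of the first-order Sobol indices
    S_i = V(E[Y|X_i]) / V(Y) for a model with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # A with its i-th column taken from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# Toy model: Y depends strongly on x0, weakly on x1, not at all on x2
def toy(X):
    return 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]

print(np.round(first_order_sobol(toy, 3), 3))   # expect roughly [0.94, 0.06, 0.00]
```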
The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, T.E.; Mihalczo, J.T.
1995-12-31
This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.; Henson, Robert
2012-01-01
A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
Economic dispatch (ED) is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generating units to meet the required load demand so that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization technique named Swarm-based Mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13, and 20 thermal generating units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be applied efficiently to the economic dispatch problem.
RESPONDENT-DRIVEN SAMPLING AS MARKOV CHAIN MONTE CARLO
GOEL, SHARAD; SALGANIK, MATTHEW J.
2013-01-01
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381
Respondent-driven sampling as Markov chain Monte Carlo.
Goel, Sharad; Salganik, Matthew J
2009-07-30
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Casemix classification payment for sub-acute and non-acute inpatient care, Thailand.
Khiaocharoen, Orathai; Pannarunothai, Supasit; Zungsontiporn, Chairoj; Riewpaiboon, Wachara
2010-07-01
There is a need to develop casemix classifications other than the DRG for the sub-acute and non-acute inpatient care payment mechanism in Thailand. The objective was to develop a casemix classification for sub-acute and non-acute inpatient services. The study began with developing a classification system, analyzing cost, and assigning payment weights, and ended with testing the validity of this new casemix system. The coefficient of variation, reduction in variance, linear regression, and split-half cross-validation were employed. The casemix for sub-acute and non-acute inpatient services contained 98 groups. Two percent of them had a coefficient of variation of cost higher than 1.5. The reduction in variance of cost after classification was 32%. Two classification variables (physical function and the rehabilitation impairment categories) were key determinants of cost (adjusted R2 = 0.749, p = .001). Split-half cross-validation of the sub-acute and non-acute inpatient classification showed high validity. The present study indicated that the casemix for sub-acute and non-acute inpatient services closely predicted hospital resource use and should be further developed for payment of sub-acute and non-acute inpatient care.
Shi, Hong-Fei; Xiong, Jin; Chen, Yi-Xin; Wang, Jun-Fei; Qiu, Xu-Sheng; Huang, Jie; Gui, Xue-Yang; Wen, Si-Yuan; Wang, Yin-He
2017-03-14
The optimal method for the reduction and fixation of posterior malleolar fracture (PMF) remains inconclusive. Currently, both the indirect and direct reduction techniques are widely used. We aimed to compare the reduction quality and clinical outcome of posterior malleolar fractures managed with the direct reduction technique through a posterolateral approach or the indirect reduction technique using ligamentotaxis. Patients with a PMF involving over 25% of the articular surface were recruited and assigned to the direct reduction (DR) group or the indirect reduction (IR) group. Following reduction and fixation of the fracture, the quality of fracture reduction was evaluated in post-operative CT images. Clinical and radiological follow-ups were performed at 6 weeks, 3 months, 6 months, 12 months, and then at 6-month intervals postoperatively. Functional outcome (AOFAS score), ankle range of motion, and Visual Analog Scale (VAS) were evaluated at the last follow-up. Statistical differences were compared between the DR and IR groups considering patient demographics, quality of fracture reduction, AOFAS score, and VAS. In total, 116 patients were included: 64 cases were assigned to the DR group and 52 cases to the IR group. The quality of fracture reduction was significantly higher in the DR group (P = 0.038). In the patients who completed a minimum of 12 months' follow-up, a median AOFAS score of 87 was recorded in the DR group, which was significantly higher than that recorded in the IR group (a median score of 80). The ankle range of motion was slightly better in the DR group, with the mean dorsiflexion restriction recorded to be 5.2° and 6.1° in the DR and IR groups, respectively (P = 0.331). Similar VAS scores were observed in the two groups (P = 0.419). The direct reduction technique through a posterolateral approach provides better quality of fracture reduction and functional outcome in the management of PMFs involving over 25% of the articular surface, as compared with the indirect reduction technique using ligamentotaxis. NCT02801474 (retrospectively registered, June 2016, ClinicalTrials.gov).
Holocene constraints on simulated tropical Pacific climate
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K. M.; Carre, M.; Braconnot, P.; Leloup, J.; Zhou, Y.; Harrison, S. P.; Correge, T.; Mcgregor, H. V.; Collins, M.; Driscoll, R.; Elliot, M.; Schneider, B.; Tudhope, A. W.
2015-12-01
The El Niño-Southern Oscillation (ENSO) influences climate and weather worldwide, so uncertainties in its response to external forcings contribute to the spread in global climate projections. Theoretical and modeling studies have argued that such forcings may affect ENSO either via the seasonal cycle, the mean state, or extratropical influences, but these mechanisms are poorly constrained by the short instrumental record. Here we synthesize a pan-Pacific network of high-resolution marine biocarbonates spanning discrete snapshots of the Holocene (past 10,000 years of Earth's history), which we use to constrain a set of global climate model (GCM) simulations via a forward model and a consistent treatment of uncertainty. Observations suggest important reductions in ENSO variability throughout the interval, most consistently during 3-5 kyBP, when approximately 2/3 reductions are inferred. The magnitude and timing of these ENSO variance reductions bear little resemblance to those simulated by GCMs, or to equatorial insolation. The central Pacific witnessed a mid-Holocene increase in seasonality, at odds with the reductions simulated by GCMs. Finally, while GCM aggregate behavior shows a clear inverse relationship between seasonal amplitude and ENSO-band variance in sea-surface temperature, in agreement with many previous studies, such a relationship is not borne out by these observations. Our synthesis suggests that tropical Pacific climate is highly variable, but exhibited millennia-long periods of reduced ENSO variability whose origins, whether forced or unforced, contradict existing explanations. It also points to deficiencies in the ability of current GCMs to simulate forced changes in the tropical Pacific seasonal cycle and its interaction with ENSO, highlighting a key area of growth for future modeling efforts.
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu
2016-12-15
Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance for monitoring the concentration of targets over time, and insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between peptidic β-sheet structure and Congo Red. The formation of peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin which is a crucial indicator for evaluating thrombus diseases in clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in clinic). Considering the wide range of ALP-based antibodies for immunoassays, such novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluation of tomotherapy MVCT image enhancement program for tumor volume delineation
Martin, Spencer; Rodrigues, George; Chen, Quan; Pavamani, Simon; Read, Nancy; Ahmad, Belal; Hammond, J. Alex; Venkatesan, Varagur; Renaud, James
2011-01-01
The aims of this study were to investigate the variability between physicians in delineation of head and neck tumors on original tomotherapy megavoltage CT (MVCT) studies and corresponding software enhanced MVCT images, and to establish an optimal approach for evaluation of image improvement. Five physicians contoured the gross tumor volume (GTV) for three head and neck cancer patients on 34 original and enhanced MVCT studies. Variation between original and enhanced MVCT studies was quantified by DICE coefficient and the coefficient of variance. Based on volume of agreement between physicians, higher correlation in terms of average DICE coefficients was observed in GTV delineation for enhanced MVCT for patients 1, 2, and 3 by 15%, 3%, and 7%, respectively, while delineation variance among physicians was reduced using enhanced MVCT for 12 of 17 weekly image studies. Enhanced MVCT provides advantages in reduction of variance among physicians in delineation of the GTV. Agreement on contouring by the same physician on both original and enhanced MVCT was equally high. PACS numbers: 87.57.N‐, 87.57.np, 87.57.nt
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. As the main contribution, the approach extends current active constellation extension (ACE) PAPR reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques introduced can be split into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated. As a result, it is shown that the interference can be compensated for, whilst still maintaining decent PAPR performance. Additional results are also provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the proposed SGP ACE-based techniques are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
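For reference, the PAPR itself is the peak-to-mean power ratio of the oversampled time-domain symbol. The sketch below computes it for plain QPSK multicarrier symbols; it ignores the FBMC prototype filter and symbol overlap and does not implement the ACE/SGP optimizations of the paper.

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR (in dB) of each multicarrier symbol: map the frequency-domain
    symbols to the time domain with a zero-padded (oversampled) IFFT and
    compare peak power to mean power. This is the plain OFDM-style
    computation; FBMC would additionally apply the prototype filter and
    overlap adjacent symbols."""
    n_sym, n_sc = freq_symbols.shape
    padded = np.zeros((n_sym, n_sc * oversample), dtype=complex)
    padded[:, : n_sc // 2] = freq_symbols[:, : n_sc // 2]     # positive frequencies
    padded[:, -n_sc // 2:] = freq_symbols[:, n_sc // 2:]      # negative frequencies
    x = np.fft.ifft(padded, axis=1)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

rng = np.random.default_rng(9)
qpsk = (rng.choice([-1, 1], (1000, 64)) + 1j * rng.choice([-1, 1], (1000, 64))) / np.sqrt(2)
papr = papr_db(qpsk)
print("99th-percentile PAPR: %.2f dB" % np.percentile(papr, 99))
```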
Soheili, Mozhgan; Nazari, Fatemeh; Shaygannejad, Vahid; Valiani, Mahboobeh
2017-01-01
Background: Multiple sclerosis (MS) occurs with a variety of physical and psychological symptoms, yet there is no conclusive cure for this disease. Complementary medicine is a current treatment that seems to be effective in relieving symptoms in patients with MS. Therefore, this study aimed to determine and compare the effects of reflexology and relaxation on anxiety, stress, and depression in women with MS. Subjects and Methods: This study is a randomized clinical trial conducted on 75 women with MS referred to the MS Clinic of Kashani Hospital. After simple non-random sampling, participants were randomly assigned by the minimization method to three groups: reflexology, relaxation, and control (25 patients in each group). In the experimental groups, reflexology and relaxation interventions were performed over 4 weeks, twice a week for 40 min, while the control group received only routine treatment as directed by a doctor. Data were collected through the depression, anxiety, and stress scale questionnaire before, immediately after, and 2 months after the interventions in all three groups. Chi-square, Kruskal–Wallis, repeated-measures analysis of variance, one-way analysis of variance, and least significant difference post hoc tests via SPSS version 18 were used to analyze the data; P < 0.05 was considered significant. Results: The results showed a significant reduction in the severity of anxiety, stress, and depression across the measurement times in the reflexology and relaxation groups as compared with the control group (P < 0.05). Conclusion: Reflexology and relaxation are effective in relieving anxiety, stress, and depression in women with MS. Hence, these two methods can be recommended as effective techniques. PMID:28546976
Soheili, Mozhgan; Nazari, Fatemeh; Shaygannejad, Vahid; Valiani, Mahboobeh
2017-01-01
Multiple sclerosis (MS) occurs with a variety of physical and psychological symptoms, yet there is no conclusive cure for this disease. Complementary medicine is a current treatment that seems to be effective in relieving symptoms in patients with MS. Therefore, this study aimed to determine and compare the effects of reflexology and relaxation on anxiety, stress, and depression in women with MS. This study is a randomized clinical trial conducted on 75 women with MS referred to the MS Clinic of Kashani Hospital. After simple non-random sampling, participants were randomly assigned by the minimization method to three groups: reflexology, relaxation, and control (25 patients in each group). In the experimental groups, reflexology and relaxation interventions were performed over 4 weeks, twice a week for 40 min, while the control group received only routine treatment as directed by a doctor. Data were collected through the depression, anxiety, and stress scale questionnaire before, immediately after, and 2 months after the interventions in all three groups. Chi-square, Kruskal-Wallis, repeated-measures analysis of variance, one-way analysis of variance, and least significant difference post hoc tests via SPSS version 18 were used to analyze the data; P < 0.05 was considered significant. The results showed a significant reduction in the severity of anxiety, stress, and depression across the measurement times in the reflexology and relaxation groups as compared with the control group (P < 0.05). Reflexology and relaxation are effective in relieving anxiety, stress, and depression in women with MS. Hence, these two methods can be recommended as effective techniques.
NASA Astrophysics Data System (ADS)
Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto
2017-04-01
We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km2. Analysis of available sedimentological information allows identifying a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two diverse conceptual schemes. The first one is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed by distinct (five, in our case) lithological units. Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
NASA Astrophysics Data System (ADS)
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has been described traditionally with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics came into the focus to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov-subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with a vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of those modern model reduction techniques is investigated including a comparison of the modal reduction approach.
Lee, Jounghee; Park, Sohyun
2016-04-01
The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We implemented Chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. This study involved 104 individuals employed at the worksite cafeterias, comprised of 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium reduction practices were higher, but perceived barriers appeared to be lower in participants with high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted to have more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were use of processed foods and limited methods of sodium-reduced cooking in worksite cafeterias. To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods.
Ivezić, Slađana Štrkalj; Sesar, Marijan Alfonso; Mužinić, Lana
2017-03-01
Self-stigma adversely affects recovery from schizophrenia. Analyses of self-stigma reduction programs have shown that few studies have investigated the impact of education about the illness on self-stigma reduction. The objective of this study was to determine whether psychoeducation based on the principles of recovery and empowerment, using therapeutic group factors, assists in reduction of self-stigma, increased empowerment, and reduced perception of discrimination in patients with schizophrenia. 40 patients participated in the psychoeducation group program and were compared with a control group of 40 patients placed on the waiting list for the same program. A Solomon four-group design was used to control the influence of the pretest. Rating scales were used to measure internalized stigma, empowerment, and perception of discrimination. Two-way analysis of variance was used to determine the main effects and interaction between the treatment and pretest. Simple analysis of variance with repeated measures was used to additionally test the effect of treatment on self-stigma, empowerment, and perceived discrimination. The participants in the psychoeducation group had lower scores on internalized stigma (F(1,76)=8.18; p<0.01) than the patients treated as usual. Analysis also confirmed the same effect when comparing the experimental group before and after psychoeducation (F(1,19)=5.52; p<0.05). All participants showed a positive trend for empowerment. Psychoeducation did not influence perception of discrimination. Group psychoeducation decreased the level of self-stigma. This intervention can assist in recovery from schizophrenia.
Exploring the CAESAR database using dimensionality reduction techniques
NASA Astrophysics Data System (ADS)
Mendoza-Schrock, Olga; Raymer, Michael L.
2012-06-01
The Civilian American and European Surface Anthropometry Resource (CAESAR) database, containing over 40 anthropometric measurements on over 4000 humans, has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections such as the classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques, such as Diffusion Maps and the Isomap technique. This paper briefly describes all three techniques, and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
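A generic reduction-plus-classifier pipeline of the kind compared in the paper (PCA followed by an SVM), run on synthetic stand-in measurements because the CAESAR data cannot be reproduced here; the variable names, means, and spreads below are invented for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for "readily observable" anthropometric variables
rng = np.random.default_rng(10)
n = 600
gender = rng.integers(0, 2, n)                        # 0 / 1 labels
stature = rng.normal(1632 + 143 * gender, 65)          # mm, illustrative values
shoulder = rng.normal(365 + 40 * gender, 25)
hip = rng.normal(385 - 15 * gender, 30)
X = np.column_stack([stature, shoulder, hip])

# Linear projection (PCA) followed by an SVM classifier
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
print("CV accuracy: %.2f" % cross_val_score(clf, X, gender, cv=5).mean())
```

Swapping the PCA step for sklearn.manifold.Isomap would give the non-linear counterpart compared in the paper.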
Grabner, Günther; Kiesel, Barbara; Wöhrer, Adelheid; Millesi, Matthias; Wurzer, Aygül; Göd, Sabine; Mallouhi, Ammar; Knosp, Engelbert; Marosi, Christine; Trattnig, Siegfried; Wolfsberger, Stefan; Preusser, Matthias; Widhalm, Georg
2017-04-01
To investigate the value of local image variance (LIV) as a new technique for quantification of hypointense microvascular susceptibility-weighted imaging (SWI) structures at 7 Tesla for preoperative glioma characterization. Adult patients with neuroradiologically suspected diffusely infiltrating gliomas were prospectively recruited and 7 Tesla SWI was performed in addition to standard imaging. After tumour segmentation, quantification of intratumoural SWI hypointensities was conducted by the SWI-LIV technique. Following surgery, the histopathological tumour grade and isocitrate dehydrogenase 1 (IDH1)-R132H mutational status was determined and SWI-LIV values were compared between low-grade gliomas (LGG) and high-grade gliomas (HGG), IDH1-R132H negative and positive tumours, as well as gliomas with significant and non-significant contrast-enhancement (CE) on MRI. In 30 patients, 9 LGG and 21 HGG were diagnosed. The calculation of SWI-LIV values was feasible in all tumours. Significantly higher mean SWI-LIV values were found in HGG compared to LGG (92.7 versus 30.8; p < 0.0001), IDH1-R132H negative compared to IDH1-R132H positive gliomas (109.9 versus 38.3; p < 0.0001) and tumours with significant CE compared to non-significant CE (120.1 versus 39.0; p < 0.0001). Our data indicate that 7 Tesla SWI-LIV might improve preoperative characterization of diffusely infiltrating gliomas and thus optimize patient management by quantification of hypointense microvascular structures. • 7 Tesla local image variance helps to quantify hypointense susceptibility-weighted imaging structures. • SWI-LIV is significantly increased in high-grade and IDH1-R132H negative gliomas. • SWI-LIV is a promising technique for improved preoperative glioma characterization. • Preoperative management of diffusely infiltrating gliomas will be optimized.
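Local image variance can be computed with a sliding-window box filter, as in the hedged sketch below; the actual window size, masking, and 7 Tesla SWI preprocessing used for SWI-LIV are assumptions here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_image_variance(image, size=5):
    """Local variance in a sliding window: Var = E[x^2] - (E[x])^2,
    computed with box filters of the given window size."""
    img = image.astype(float)
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img ** 2, size=size)
    return np.maximum(mean_sq - mean ** 2, 0.0)

# Toy example: a dark "vessel-like" line on a bright background raises LIV
img = np.full((64, 64), 100.0)
img[30:33, 10:50] = 20.0
liv = local_image_variance(img, size=5)
print("mean LIV inside a tumour-like ROI:", liv[20:45, 5:55].mean())
```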
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
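A simplified one-channel sketch of the general idea: scale the image to the target dose, stabilize the signal-dependent noise with the Anscombe transformation, inject the additional Gaussian noise implied by the lower dose, and transform back. Detector offset and gain maps, anisotropic noise, and the DQE effects handled in the paper are omitted, and the gain value is an assumption.

```python
import numpy as np

def simulate_lower_dose(image, dose_fraction, gain=1.0, seed=0):
    """Crude dose-reduction simulation for a quantum-noise-limited image
    assumed to be linear with dose (offset already removed)."""
    rng = np.random.default_rng(seed)
    counts = image / gain                      # approximate quanta per pixel
    scaled = counts * dose_fraction            # mean signal at the reduced dose
    a = 2.0 * np.sqrt(scaled + 3.0 / 8.0)      # Anscombe transform
    # In the Anscombe domain the scaled image carries noise of variance
    # ~dose_fraction, while a true low-dose image would have unit variance,
    # so top it up with Gaussian noise of variance (1 - dose_fraction).
    extra_std = np.sqrt(max(1.0 - dose_fraction, 0.0))
    a_noisy = a + rng.normal(0.0, extra_std, size=a.shape)
    low_dose_counts = (a_noisy / 2.0) ** 2 - 3.0 / 8.0   # algebraic inverse
    return np.maximum(low_dose_counts, 0.0) * gain

rng = np.random.default_rng(1)
full_dose = rng.poisson(10000.0, size=(128, 128)).astype(float)
half_dose = simulate_lower_dose(full_dose, dose_fraction=0.5)
print("relative noise, full vs simulated half dose: %.4f vs %.4f"
      % (full_dose.std() / full_dose.mean(), half_dose.std() / half_dose.mean()))
```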
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
Helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and the combined models are linearized around a straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement in behaviour over classical controls, closed-loop analyses are performed. PMID:26180841
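A bare-bones version of the SPSA update used for such joint tuning is sketched below; the gain sequences and the toy quadratic cost are illustrative stand-ins, not the helicopter FCS/MHT objective from the paper:

    import numpy as np

    def spsa_minimize(cost, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, rng=None):
        # Simultaneous perturbation stochastic approximation: two (possibly noisy)
        # cost evaluations per iteration approximate the full gradient.
        rng = np.random.default_rng(0) if rng is None else rng
        theta = np.asarray(theta0, dtype=float).copy()
        for k in range(n_iter):
            ak, ck = a / (k + 1) ** alpha, c / (k + 1) ** gamma
            delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
            g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck * delta)
            theta -= ak * g_hat
        return theta

    # toy usage: jointly "optimize" two controller gains and one tail dimension
    print(spsa_minimize(lambda t: float(np.sum((t - np.array([1.0, -2.0, 0.5])) ** 2)), np.zeros(3)))

The appeal of SPSA in this setting is that the number of cost evaluations per iteration does not grow with the number of simultaneously optimized parameters.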
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve C-peptide following a 2-hour mixed meal tolerance test from 481 individuals enrolled on 5 prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrollment were modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
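The roughly 50% saving follows from the usual two-sample formula, in which the required sample size scales with the outcome variance; the sketch below uses illustrative planning numbers (the standard deviations and effect size are not taken from the paper):

    import math
    from scipy.stats import norm

    def n_per_arm(sigma, delta, alpha=0.05, power=0.90):
        # two-arm comparison of means: n per arm is proportional to sigma**2
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sigma / delta) ** 2)

    sigma_raw, sigma_ancova, delta = 1.00, 0.72, 0.35    # illustrative values only
    print(n_per_arm(sigma_raw, delta), n_per_arm(sigma_ancova, delta))
    # 0.72**2 is about 0.52, so an ANCOVA-adjusted residual SD of this size roughly halves n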
Allen, Sean T; Ruiz, Monica S; Roess, Amira; Jones, Jeff
2015-10-12
Prior research has examined access to syringe exchange program (SEP) services among persons who inject drugs (PWID), but no research has been conducted to evaluate variations in SEP access based on season. This is an important gap in the literature given that seasonal weather patterns and inclement weather may affect SEP service utilization. The purpose of this research is to examine differences in access to SEPs by season among PWID in the District of Columbia (DC). A geometric point distance estimation technique was applied to records from a DC SEP that operated from 1996 to 2011. We calculated the walking distance (via sidewalks) from the centroid point of zip code of home residence to the exchange site where PWID presented for services. Analysis of variance (ANOVA) was used to examine differences in walking distance measures by season. Differences in mean walking distance measures were statistically significant between winter and spring with PWID traveling approximately 2.88 and 2.77 miles, respectively, to access the SEP during these seasons. The results of this study suggest that seasonal differences in SEP accessibility may exist between winter and spring. PWID may benefit from harm reduction providers adapting their SEP operations to provide a greater diversity of exchange locations during seasons in which inclement weather may negatively influence engagement with SEPs. Increasing the number of exchange locations based on season may help resolve unmet needs among injectors.
Design and grayscale fabrication of beamfanners in a silicon substrate
NASA Astrophysics Data System (ADS)
Ellis, Arthur Cecil
2001-11-01
This dissertation addresses important first steps in the development of a grayscale fabrication process for multiple phase diffractive optical elements (DOEs) in silicon. Specifically, this process was developed through the design, fabrication, and testing of 1-2 and 1-4 beamfanner arrays for 5-micron illumination. The 1-2 beamfanner arrays serve as a test-of-concept and basic developmental step toward the construction of the 1-4 beamfanners. The beamfanners are 50 microns wide, and have features with dimensions of between 2 and 10 microns. The Iterative Annular Spectrum Approach (IASA) method, developed by Steve Mellin of UAH, and the Boundary Element Method (BEM) are the design and testing tools used to create the beamfanner profiles and predict their performance. Fabrication of the beamfanners required the techniques of grayscale photolithography and reactive ion etching (RIE). A 2-3 micron feature size 1-4 silicon beamfanner array was fabricated, but the small features and the contact photolithographic techniques available prevented its construction to specifications. A second and more successful attempt was made in which both 1-4 and 1-2 beamfanner arrays were fabricated with a 5-micron minimum feature size. Photolithography for the UAH array was contracted to MEMS-Optical of Huntsville, Alabama. A repeatability study was performed, using statistical techniques, of 14 photoresist arrays and the subsequent RIE process used to etch the arrays in silicon. The variance in selectivity between the 14 processes was far greater than the variance between the individual etched features within each process. Specifically, the ratio of the variance of the selectivities averaged over each of the 14 etch processes to the variance of individual feature selectivities within the processes yielded a significance level below 0.1% by F-test, indicating that good etch-to-etch process repeatability was not attained. One of the 14 arrays had feature etch-depths close enough to design specifications for optical testing, but 5-micron IR illumination of the 1-4 and 1-2 beamfanners yielded no convincing results of beam splitting in the detector plane 340 microns from the surface of the beamfanner array.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that no quantitative estimation method holds a particular advantage, and that dose reduction via tube current reduction offers no particular advantage over temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation with certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
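The one-dimensional case the abstract refers to can be made concrete with a short cyclic reduction solver for the Dirichlet Poisson problem; this is a textbook-style implementation written for this overview (it assumes 2^k - 1 interior points), not code from the report:

    import numpy as np

    def cyclic_reduction(a, b, c, d):
        # Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
        # right-hand side d) by cyclic reduction; assumes len(b) == 2**k - 1.
        n = len(b)
        if n == 1:
            return np.array([d[0] / b[0]])
        keep = np.arange(1, n, 2)                    # odd (0-based) unknowns survive
        alpha, gamma = a[keep] / b[keep - 1], c[keep] / b[keep + 1]
        a2 = -alpha * a[keep - 1]
        b2 = b[keep] - alpha * c[keep - 1] - gamma * a[keep + 1]
        c2 = -gamma * c[keep + 1]
        d2 = d[keep] - alpha * d[keep - 1] - gamma * d[keep + 1]
        x = np.zeros(n)
        x[keep] = cyclic_reduction(a2, b2, c2, d2)   # recurse on the half-size system
        xpad = np.concatenate(([0.0], x, [0.0]))     # zero (Dirichlet-like) padding
        even = np.arange(0, n, 2)
        x[even] = (d[even] - a[even] * xpad[even] - c[even] * xpad[even + 2]) / b[even]
        return x

    # -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0; exact solution is sin(pi x)
    n = 2**7 - 1
    h = 1.0 / (n + 1)
    grid = np.linspace(h, 1.0 - h, n)
    u = cyclic_reduction(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0),
                         h**2 * np.pi**2 * np.sin(np.pi * grid))
    print(np.max(np.abs(u - np.sin(np.pi * grid))))  # O(h^2) discretization error

Each elimination pass halves the number of unknowns, which is exactly the coarsening pattern that the ACR multigrid process approximates in higher dimensions.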
Autologous fat graft as treatment of post short stature surgical correction scars.
Maione, Luca; Memeo, Antonio; Pedretti, Leopoldo; Verdoni, Fabio; Lisa, Andrea; Bandi, Valeria; Giannasi, Silvia; Vinci, Valeriano; Mambretti, Andrea; Klinger, Marco
2014-12-01
Surgical limb lengthening is undertaken to correct pathological short stature. Among the possible complications related to this procedure, painful and retractile scars are a cause for both functional and cosmetic concern. Our team has already shown the efficacy of autologous fat grafting in the treatment of scars with varying aetiology, so we decided to apply this technique to scars related to surgical correction of dwarfism. A prospective study was conducted to evaluate the efficacy of autologous fat grafting in the treatment of post-surgical scars in patients with short-limb dwarfism using durometer measurements and a modified patient and observer scar assessment scale (POSAS), to which was added a parameter to evaluate movement impairment. Between January 2009 and September 2012, 36 children (28 female and 8 male) who presented retractile and painful post-surgical scars came to our unit and were treated with autologous fat grafting. Preoperative and postoperative mean durometer measurements were analysed using the analysis of variance (ANOVA) test and POSAS parameters were studied using the Wilcoxon rank sum test. There was a statistically significant reduction in all durometer measurements (p-value <0.05) and in all but one of the POSAS parameters (p-value <0.05) following treatment with autologous fat grafting. Surgical procedures to camouflage scars on lower limbs are not often used as a first approach and non-surgical treatments often lead to unsatisfactory results. In contrast, our autologous fat grafting technique in the treatment of post-surgical scars has been shown to be a valuable option in patients with short-limb dwarfism. There was a reduction of skin hardness and a clinical improvement of all POSAS parameters in all patients treated. Moreover, the newly introduced POSAS parameter appears to be reliable and we recommend that it is included to give a more complete evaluation of patient perception. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ismail, Azimah; Toriman, Mohd Ekhwan; Juahir, Hafizan; Zain, Sharifuddin Md; Habir, Nur Liyana Abdul; Retnam, Ananthy; Kamaruddin, Mohd Khairul Amri; Umar, Roslan; Azid, Azman
2016-05-15
This study presents the determination of the spatial variation and source identification of heavy metal pollution in surface water along the Straits of Malacca using several chemometric techniques. Clustering and discrimination of heavy metal compounds in surface water into two groups (northern and southern regions) are observed according to the level of concentrations via the application of chemometric techniques. Principal component analysis (PCA) demonstrates that Cu and Cr dominate the source apportionment in the northern region with a total variance of 57.62% and are identified with mining and shipping activities; these are the major contamination contributors in the Straits. Land-based pollution originating from vehicular emission, with a total variance of 59.43%, is attributed to the high level of Pb concentration in the southern region. The results revealed that one representative state from each cluster (northern and southern regions) is sufficient as the main location for investigating heavy metal concentration in the Straits of Malacca, which would save monitoring cost and time. Copyright © 2015 Elsevier Ltd. All rights reserved.
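For readers unfamiliar with the PCA step, a minimal Python sketch of variance-based source apportionment on synthetic metal concentrations is shown below; the two hypothetical sources and their loading patterns are invented for illustration and do not reproduce the study's data:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # hypothetical source profiles over five metals (columns: Cd, Cr, Cu, Pb, Zn)
    mining_shipping = rng.lognormal(0.0, 0.3, (50, 1)) * np.array([0.1, 1.0, 1.2, 0.1, 0.3])
    vehicular = rng.lognormal(0.0, 0.3, (50, 1)) * np.array([0.2, 0.1, 0.1, 1.5, 0.8])
    X = mining_shipping + vehicular + rng.normal(0.0, 0.05, (50, 5))

    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
    print(pca.explained_variance_ratio_)    # share of total variance per component
    print(pca.components_)                  # loadings: which metals dominate each PC

The explained-variance shares and loadings are the quantities interpreted in the abstract (e.g. a Cu/Cr-dominated component attributed to mining and shipping, a Pb-dominated component attributed to traffic).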
The Missed Inferior Alveolar Block: A New Look at an Old Problem
Milles, Maano
1984-01-01
A variation of a previously described technique to obtain mandibular block anesthesia is presented. This technique varies from those previously described in that it uses palpable anatomic landmarks, both extra- and intraoral, to orient the placement of the needle. This technique relies on several readily observed landmarks and the integration of these landmarks. Because palpable landmarks are used, consistent results can be easily obtained even in patients who present with a wide variety of anatomical variances which otherwise make this injection technique difficult and prone to failure. PMID:6597690
NASA Astrophysics Data System (ADS)
Beger, Richard D.; Buzatu, Dan A.; Wilkes, Jon G.
2002-10-01
A three-dimensional quantitative spectrometric data-activity relationship (3D-QSDAR) modeling technique which uses NMR spectral and structural information that is combined in a 3D-connectivity matrix has been developed. A 3D-connectivity matrix was built by displaying all possible assigned carbon NMR chemical shifts, carbon-to-carbon connections, and distances between the carbons. Two-dimensional 13C-13C COSY and 2D slices from the distance dimension of the 3D-connectivity matrix were used to produce a relationship among the 2D spectral patterns for polychlorinated dibenzofurans, dibenzodioxins, and biphenyls (PCDFs, PCDDs, and PCBs respectively) binding to the aryl hydrocarbon receptor (AhR). We refer to this technique as comparative structural connectivity spectral analysis (CoSCoSA) modeling. All CoSCoSA models were developed using forward multiple linear regression analysis of the predicted 13C NMR structure-connectivity spectral bins. A CoSCoSA model for 26 PCDFs had an explained variance (r²) of 0.93 and an average leave-four-out cross-validated variance (q₄²) of 0.89. A CoSCoSA model for 14 PCDDs produced an r² of 0.90 and an average leave-two-out cross-validated variance (q₂²) of 0.79. One CoSCoSA model for 12 PCBs gave an r² of 0.91 and an average q₂² of 0.80. Another CoSCoSA model for all 52 compounds had an r² of 0.85 and an average q₄² of 0.52. Major benefits of CoSCoSA modeling include ease of development since the technique does not use molecular docking routines.
Non-local means denoising of dynamic PET images.
Dutta, Joyita; Leahy, Richard M; Li, Quanzheng
2013-01-01
Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while lowering the background noise variance.
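To make the three modifications concrete, here is a deliberately simplified (and slow, loop-based) Python sketch: patch similarities are computed once from the later, less noisy frames and the resulting weights are applied to every time point; the spatially varying smoothing parameter of the paper is replaced by a fixed h, and all parameter values below are placeholders rather than the authors' settings:

    import numpy as np

    def nlm_dynamic(img, search=3, patch=1, h=0.1, late_frames=4):
        # img[t, y, x]: weights are derived from patches of the later (less noisy)
        # frames and then applied to every time frame of the series.
        T, H, W = img.shape
        refp = np.pad(img[-late_frames:], ((0, 0), (patch, patch), (patch, patch)), mode="reflect")
        out = np.zeros_like(img, dtype=float)
        for y in range(H):
            for x in range(W):
                p0 = refp[:, y:y + 2 * patch + 1, x:x + 2 * patch + 1]
                num, den = np.zeros(T), 0.0
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if not (0 <= yy < H and 0 <= xx < W):
                            continue
                        p1 = refp[:, yy:yy + 2 * patch + 1, xx:xx + 2 * patch + 1]
                        w = np.exp(-np.mean((p0 - p1) ** 2) / (h * h))
                        num += w * img[:, yy, xx]
                        den += w
                out[:, y, x] = num / den
        return out

    # toy usage on a small synthetic dynamic series
    rng = np.random.default_rng(0)
    clean = np.linspace(1.0, 4.0, 20)[:, None, None] * np.ones((20, 16, 16))
    noisy = clean + rng.normal(0.0, 0.3, clean.shape)
    print(np.std(noisy - clean), np.std(nlm_dynamic(noisy, h=0.3) - clean))

Deriving the weights from the late frames and reusing them across the time series is what lets the early, noisier frames borrow strength from the better-determined kinetic tail.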
The Transport of Density Fluctuations Throughout the Heliosphere
NASA Technical Reports Server (NTRS)
Zank, G. P.; Jetha, N.; Hu, Q.; Hunana, P.
2012-01-01
The solar wind is recognized as a turbulent magnetofluid, for which the properties of the turbulent velocity and magnetic field fluctuations are often described by the equations of incompressible magnetohydrodynamics (MHD). However, low-frequency density turbulence is also ubiquitous. On the basis of a nearly incompressible formulation of MHD in the expanding inhomogeneous solar wind, we derive the transport equation for the variance of the density fluctuations, ⟨ρ²⟩. The transport equation shows that density fluctuations behave as a passive scalar in the supersonic solar wind. In the absence of sources of density turbulence, such as within 1 AU, the variance scales approximately as ⟨ρ²⟩ ~ r^-4. In the outer heliosphere beyond 1 AU, the shear between fast and slow streams, the propagation of shocks, and the creation of interstellar pickup ions all act as sources of density turbulence. The model density fluctuation variance evolves with heliocentric distance within approximately 300 AU as ⟨ρ²⟩ ~ r^-3.3, after which it flattens and then slowly increases. This is precisely the radial profile for the density fluctuation variance observed by Voyager 2. Using a different analysis technique, we confirm the radial profile for ⟨ρ²⟩ of Bellamy, Cairns, & Smith using Voyager 2 data. We conclude that a passive scalar description for density fluctuations in the supersonic solar wind can explain the density fluctuation variance observed in both the inner and the outer heliosphere.
Detection of Vegetable Oil Variance Using Surface Plasmon Resonance (SPR) Technique
NASA Astrophysics Data System (ADS)
Supardianningsih; Panggabean, R. D.; Romadhon, I. A.; Laksono, F. D.; Nofianti, U.; Abraha, K.
2018-05-01
The difference between coconut oil, corn oil, olive oil, and palm oil has been detected using the surface plasmon resonance (SPR) technique. This is a new method in material characterization that can be used to identify vegetable oil variance. The SPR curve was measured by an SPR system consisting of optical instruments, mechanical instruments, a main unit, and a user interface (computer). A He-Ne laser beam of wavelength 633 nm was used as the light source, while a gold (Au) thin film evaporated onto a half-cylinder prism was used as the base on which surface plasmon polariton (SPP) waves propagate at the interface. Tween-80 and PEG-400 were used as surfactant and co-surfactant to make a water-oil emulsion from each sample. The samples were prepared with an oil:surfactant:co-surfactant ratio of 1:2:1 and then stirred into water to make emulsions. The angle shift was measured as the change of SPR angle from the prism/Au/air system to the prism/Au/water-oil emulsion system. A different SPR angle was detected for each sample at various numbers of sprays, the method used for depositing the emulsion. From this work, we conclude that the saturated fatty acid component is the most significant component changing the refractive index of the vegetable oil in water emulsion, and that this can be used to characterize vegetable oil variance.
Least-squares dual characterization for ROI assessment in emission tomography
NASA Astrophysics Data System (ADS)
Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.
2013-06-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates depend on environmental conditions, are linked with soil heterotrophic CO2 emissions, and serve for estimating soil carbon sequestration. By the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided that the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during the year 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although the percentage of variance in soil heterotrophic respiration explained by soil moisture was small, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture responses of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate correct levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models by the annual heterotrophic respiration and soil carbon stocks.
Meta-analysis of the performance variation in broilers experimentally challenged by Eimeria spp.
Kipper, Marcos; Andretta, Ines; Lehnen, Cheila Roberta; Lovatto, Paulo Alberto; Monteiro, Silvia Gonzalez
2013-09-01
A meta-analysis was carried out to (1) study the relation of the variation in feed intake and weight gain in broilers infected with Eimeria acervulina, Eimeria maxima, Eimeria tenella, or a pool of Eimeria species, and (2) identify and quantify the effects involved in the infection. A database of articles addressing experimental infection with Coccidia in broilers was developed. These publications had to present results of animal performance (weight gain, feed intake, and feed conversion ratio). The database was composed of 69 publications, totalling around 44 thousand animals. The meta-analysis followed three sequential analyses: graphical, correlation, and variance-covariance. The feed intake of the groups challenged by E. acervulina and E. tenella did not differ (P>0.05) from the control group. However, the feed intake in groups challenged by E. maxima and the pool showed an increase of 8% and 5% (P<0.05) in relation to the control group. Challenged groups presented a decrease (P<0.05) in weight gain compared with control groups. All challenged groups showed a reduction in weight gain, even when there was no reduction (P<0.05) in feed intake (adjustment through variance-covariance analysis). The feed intake variation in broilers infected with E. acervulina, E. maxima, E. tenella, or the pool showed a quadratic (P<0.05) influence over the variation in weight gain. In relation to the isolated effects, the challenges have an impact of less than 1% over the variance in feed intake and weight gain. However, the magnitude of the effects varied with Eimeria species, animal age, sex, and genetic line. In general the age effect is larger than the challenge effect, showing that age at the challenge is important to determine the impact of Eimeria infection. Copyright © 2013 Elsevier B.V. All rights reserved.
Systems, Subjects, Sessions: To What Extent Do These Factors Influence EEG Data?
Melnik, Andrew; Legkov, Petr; Izdebski, Krzysztof; Kärcher, Silke M.; Hairston, W. David; Ferris, Daniel P.; König, Peter
2017-01-01
Lab-based electroencephalography (EEG) techniques have matured over decades of research and can produce high-quality scientific data. It is often assumed that the specific choice of EEG system has limited impact on the data and does not add variance to the results. However, many low cost and mobile EEG systems are now available, and there is some doubt as to how EEG data vary across these newer systems. We sought to determine how variance across systems compares to variance across subjects or repeated sessions. We tested four EEG systems: two standard research-grade systems, one system designed for mobile use with dry electrodes, and an affordable mobile system with a lower channel count. We recorded four subjects three times with each of the four EEG systems. This setup allowed us to assess the influence of all three factors on the variance of data. Subjects performed a battery of six short standard EEG paradigms based on event-related potentials (ERPs) and steady-state visually evoked potential (SSVEP). Results demonstrated that subjects account for 32% of the variance, systems for 9% of the variance, and repeated sessions for each subject-system combination for 1% of the variance. In most lab-based EEG research, the number of subjects per study typically ranges from 10 to 20, and error of uncertainty in estimates of the mean (like ERP) will improve by the square root of the number of subjects. As a result, the variance due to EEG system (9%) is of the same order of magnitude as variance due to subjects (32%/sqrt(16) = 8%) with a pool of 16 subjects. The two standard research-grade EEG systems had no significantly different means from each other across all paradigms. However, the two other EEG systems demonstrated different mean values from one or both of the two standard research-grade EEG systems in at least half of the paradigms. In addition to providing specific estimates of the variability across EEG systems, subjects, and repeated sessions, we also propose a benchmark to evaluate new mobile EEG systems by means of ERP responses. PMID:28424600
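The comparison in the abstract is easy to reproduce; the snippet below simply restates the reported variance shares and shows how the subject-related term shrinks with the square root of the sample size (the subject counts other than 16 are illustrative):

    import numpy as np

    subject_share, system_share = 0.32, 0.09      # variance shares reported above
    for n_subjects in (4, 16, 64):
        print(n_subjects, round(subject_share / np.sqrt(n_subjects), 3), system_share)
    # with 16 subjects, 0.32/sqrt(16) = 0.08, i.e. comparable to the 0.09 system share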
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
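The weight-window mechanics that FW-CADIS parameters ultimately feed into can be sketched generically as follows; this illustrates splitting and Russian roulette against a window and is not the MCNPX/ADVANTG implementation (the bounds and survival weight are arbitrary choices here):

    import numpy as np

    def apply_weight_window(weight, w_low, w_high, rng):
        # Splitting / Russian roulette against a weight window [w_low, w_high];
        # returns the list of surviving particle weights (possibly empty).
        if weight > w_high:                              # split heavy particles
            n_split = int(np.ceil(weight / w_high))
            return [weight / n_split] * n_split
        if weight < w_low:                               # roulette light particles
            w_survive = w_high                           # one common survival-weight choice
            return [w_survive] if rng.random() < weight / w_survive else []
        return [weight]                                  # inside the window: keep as is

    rng = np.random.default_rng(1)
    print(apply_weight_window(5.0, 0.5, 2.0, rng))       # split into 3 particles of weight 5/3
    print(apply_weight_window(0.1, 0.5, 2.0, rng))       # survives with weight 2.0 or is killed

Because both branches preserve the expected weight, the tallies stay unbiased while the population of particles is concentrated in the regions the deterministic importance map marks as relevant.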
The human as a detector of changes in variance and bandwidth
NASA Technical Reports Server (NTRS)
Curry, R. E.; Govindaraj, T.
1977-01-01
The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
NASA Astrophysics Data System (ADS)
Gao, Z. Q.; Bian, L. G.; Chen, Z. G.; Sparrow, M.; Zhang, J. H.
2006-05-01
This paper describes the application of the variance method for flux estimation over a mixed agricultural region in China. Eddy covariance and flux variance measurements were conducted in a near-surface layer over a non-uniform land surface in the central plain of China from 7 June to 20 July 2002. During this period, the mean canopy height was about 0.50 m. The study site consisted of grass (10% of area), beans (15%), corn (15%) and rice (60%). Under unstable conditions, the standard deviations of temperature and water vapor density (normalized by appropriate scaling parameters), observed by a single instrument, followed the Monin-Obukhov similarity theory. The similarity constants for heat (C_T) and water vapor (C_q) were 1.09 and 1.49, respectively. In comparison with direct measurements using eddy covariance techniques, the flux variance method, on average, underestimated sensible heat flux by 21% and latent heat flux by 24%, which may be attributed to the fact that the observed slight deviations (20% or 30% at most) of the similarity "constants" may be within the expected range of variation of a single instrument from the generally-valid relations.
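Under strongly unstable (free-convective) conditions the flux-variance relation can be inverted for the sensible heat flux; the sketch below uses the similarity constant C_T = 1.09 quoted above, while the air properties and input values are illustrative:

    import numpy as np

    def sensible_heat_flux_fv(sigma_T, z, T_mean, C_T=1.09, rho=1.2, cp=1005.0,
                              kappa=0.4, g=9.81):
        # Free-convection flux-variance relation: sigma_T / |T*| = C_T * (-z/L)**(-1/3).
        # Eliminating the friction velocity gives
        #   H = rho * cp * (sigma_T / C_T)**1.5 * sqrt(kappa * g * z / T_mean)   [W m-2]
        return rho * cp * (sigma_T / C_T) ** 1.5 * np.sqrt(kappa * g * z / T_mean)

    print(sensible_heat_flux_fv(sigma_T=0.4, z=2.0, T_mean=300.0))   # roughly 43 W m-2

This free-convective form neglects the full stability dependence, which is one reason flux-variance estimates can deviate from direct eddy covariance measurements as reported above.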
Zhu, Jiang; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K Kirk; Zhou, Qifa; Chen, Zhongping
2015-05-01
We report on a novel acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) technique for imaging shear wave and quantifying shear modulus under orthogonal acoustic radiation force (ARF) excitation using the optical coherence tomography (OCT) Doppler variance method. The ARF perpendicular to the OCT beam is produced by a remote ultrasonic transducer. A shear wave induced by ARF excitation propagates parallel to the OCT beam. The OCT Doppler variance method, which is sensitive to the transverse vibration, is used to measure the ARF-induced vibration. For analysis of the shear modulus, the Doppler variance method is utilized to visualize shear wave propagation instead of Doppler OCT method, and the propagation velocity of the shear wave is measured at different depths of one location with the M scan. In order to quantify shear modulus beyond the OCT imaging depth, we move ARF to a deeper layer at a known step and measure the time delay of the shear wave propagating to the same OCT imaging depth. We also quantitatively map the shear modulus of a cross-section in a tissue-equivalent phantom after employing the B scan.
A combination of selected mapping and clipping to increase energy efficiency of OFDM systems
Lee, Byung Moo; Rim, You Seung
2017-01-01
We propose an energy efficient combination design for OFDM systems based on selected mapping (SLM) and clipping peak-to-average power ratio (PAPR) reduction techniques, and show the related energy efficiency (EE) performance analysis. The combination of two different PAPR reduction techniques can provide a significant benefit in increasing EE, because it can take advantage of both techniques. For the combination, we choose the clipping and SLM techniques, since the former technique is quite simple and effective, and the latter technique does not cause any signal distortion. We provide the structure and the systematic operating method, and show the various analyses to derive the EE gain based on the combined technique. Our analyses show that the combined technique increases the EE by 69% compared to no PAPR reduction, and by 19.34% compared to only using the SLM technique. PMID:29023591
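A toy baseband illustration of the combination is given below: SLM first selects the lowest-PAPR candidate among a few random phase rotations, and an envelope clip is then applied; the subcarrier count, candidate count and clipping ratio are arbitrary choices, not the values analysed in the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_candidates, clip_ratio = 256, 8, 1.5     # subcarriers, SLM candidates, clip level / rms

    def papr_db(x):
        return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

    X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)  # QPSK

    # SLM: keep the candidate with the lowest PAPR among random phase rotations
    candidates = [np.fft.ifft(X * np.exp(2j * np.pi * rng.random(N))) for _ in range(n_candidates)]
    best = min(candidates, key=papr_db)

    # clipping: limit the envelope while preserving the phase
    A = clip_ratio * np.sqrt(np.mean(np.abs(best) ** 2))
    clipped = best * np.minimum(1.0, A / np.maximum(np.abs(best), 1e-12))

    print(papr_db(np.fft.ifft(X)), papr_db(best), papr_db(clipped))

Applying the distortion-free SLM stage first means the subsequent clip has to remove fewer, smaller peaks, which is the intuition behind combining the two techniques.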
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
La Padula, Simone; Hersant, Barbara; Noel, Warren; Meningaud, Jean Paul
2018-05-01
As older people increasingly care for their body image and remain active longer, the demand for reduction mammaplasty is increasing in this population. Only a few studies of reduction mammaplasty have specifically focussed on the outcomes in elderly women. We developed a new breast reduction technique: the Liposuction-Assisted Four Pedicle-Based Breast Reduction (LAFPBR) that is especially indicated for elderly patients. The aim of this paper was to describe the LAFPBR technique and to determine whether it could be considered a safer option for elderly patients compared to the superomedial pedicle (SMP) technique. A retrospective study included sixty-two women aged 60 years and over who underwent bilateral breast reduction mammaplasty. Thirty-one patients underwent LAFPBR and 31 patients were operated using the SMP technique. Complications and patient satisfaction in both groups were analysed. Patient satisfaction was measured using a validated questionnaire: the client satisfaction questionnaire 8 (CSQ-8). The LAFPBR technique required less operating time, and avoided significant blood loss. Six minor complications were observed in SMP patients. No LAFPBR women developed a procedure-related complication. Patient satisfaction was high with a mean score of 29.65 in LAFPBR patients and 28.68 in SMP patients. The LAFPBR is an easy procedure that appears safer than SMP and results in a high satisfaction rate in elderly women. Copyright © 2018 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Ahrens, Philipp; Sandmann, Gunther; Bauer, Jan; König, Benjamin; Martetschläger, Frank; Müller, Dirk; Siebenlist, Sebastian; Kirchhoff, Chlodwig; Neumaier, Markus; Biberthaler, Peter; Stöckle, Ulrich; Freude, Thomas
2012-09-01
Fractures of the tibial plateau are among the most severe injuries of the knee joint and lead to advanced gonarthrosis if the reduction does not restore perfect joint congruency. Many different reduction techniques focusing on open surgical procedures have been described in the past. In this context we would like to introduce a novel technique which was first tested in a cadaver setup and has undergone its successful first clinical application. Since kyphoplasty demonstrated effective ways of anatomical correction in spine fractures, we adapted the inflatable instruments and used the balloon technique to reduce depressed fragments of the tibial plateau. The technique enabled us to restore a congruent cartilage surface and bone reduction. In this technique we see a useful new method to reduce depressed fractures of the tibial plateau with the advantages of low collateral damage as it is known from minimally invasive procedures.
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Testing variance components by two jackknife methods
USDA-ARS?s Scientific Manuscript database
The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it can result in large variation for an estimate ...
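A minimal delete-one jackknife with pseudo-values, as referred to above, looks like this in Python; the variance statistic and the sample are placeholders chosen only to illustrate the mechanics:

    import numpy as np

    def jackknife_pseudovalues(x, stat):
        # delete-one jackknife: the mean of the pseudo-values is the bias-reduced
        # estimate, and their variance divided by n estimates the estimator's variance
        x = np.asarray(x)
        n = len(x)
        theta_full = stat(x)
        theta_loo = np.array([stat(np.delete(x, i)) for i in range(n)])
        return n * theta_full - (n - 1) * theta_loo

    rng = np.random.default_rng(0)
    sample = rng.normal(0.0, 2.0, 60)
    pv = jackknife_pseudovalues(sample, np.var)          # e.g. a variance-type statistic
    print(pv.mean(), pv.var(ddof=1) / len(sample))       # estimate and its estimated variance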
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Three insulation methods to minimize intravenous fluid administration set heat loss.
Piek, Richardt; Stein, Christopher
2013-01-01
To assess the effect of three methods for insulating an intravenous (IV) fluid administration set on the temperature of warmed fluid delivered rapidly in a cold environment. The three chosen techniques for insulation of the IV fluid administration set involved enclosing the tubing of the set in 1) a cotton conforming bandage, 2) a reflective emergency blanket, and 3) a combination of technique 2 followed by technique 1. Intravenous fluid warmed to 44°C was infused through a 20-drop/mL 180-cm-long fluid administration set in a controlled environmental temperature of 5°C. Temperatures in the IV fluid bag, the distal end of the fluid administration set, and the environment were continuously measured with resistance thermosensors. Twenty repetitions were performed in four conditions, namely, a control condition (with no insulation) and the three different insulation methods described above. One-way analysis of variance was used to assess the mean difference in temperature between the IV fluid bag and the distal fluid administration set under the four conditions. In the control condition, a mean of 5.28°C was lost between the IV fluid bag and the distal end of the fluid administration set. There was a significant difference found between the four conditions (p < 0.001). A mean of 3.53°C was lost between the IV fluid bag and the distal end of the fluid administration set for both the bandage and reflective emergency blanket, and a mean of 3.06°C was lost when the two methods were combined. Using inexpensive and readily available materials to insulate a fluid administration set can result in a reduction of heat loss in rapidly infused, warmed IV fluid in a cold environment.
Haldavnekar, Richa Vivek; Tekur, Padmini; Nagarathna, Raghuram; Nagendra, Hongasandra Ramarao
2014-01-01
Background: Studies have shown that Integrated Yoga reduces pain, disability, anxiety and depression and increases spinal flexibility and quality-of-life in chronic low back pain (CLBP) patients. Objective: The objective of this study was to compare the effect of two yoga practices, namely laghu shankha prakshalana (LSP) kriya, a yogic colon cleansing technique, and back pain specific asanas (Back pain special technique [BST]), on pain, disability, spinal flexibility and state anxiety in patients with CLBP. Materials and Methods: In this randomized controlled (self as control) study, 40 in-patients (25 males, 15 females) between 25 and 70 years (44.05 ± 13.27) with CLBP were randomly assigned to receive LSP or BST sessions. The measurements were taken immediately before and after each session of either of the practices (30 min) in the same participant. Randomization was used to decide the day of the session (3rd or 5th day after admission) to ensure random distribution of the carry-over effect of the two practices. Statistical analysis was performed using repeated measures analysis of variance. Results: A significant group * time interaction (P < 0.001) was observed in the 11-point numerical rating scale, spinal flexibility (on a Leighton-type goniometer and the straight leg raise test in both legs), the Oswestry Disability Index, and state anxiety (XI component of Spielberger's state and trait anxiety inventory). There was a significantly (P < 0.001, between groups) better reduction in the LSP than in the BST group on all variables. No adverse effects were reported by any participant. Conclusion: Clearing the bowel by the yoga-based colon cleansing technique (LSP) is safe and offers an immediate analgesic effect with reduced disability, anxiety and improved spinal flexibility in patients with CLBP. PMID:25035620
Retention of denture bases fabricated by three different processing techniques – An in vivo study
Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen
2016-01-01
Aim: Distortion due to polymerization shrinkage compromises retention. The aim was to evaluate the amount of retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by the three polymerization techniques. A loop was attached to the finished denture bases to estimate, with a retention apparatus, the force required to dislodge them. Readings were subjected to the nonparametric Friedman two-way analysis of variance followed by Bonferroni correction methods and the Wilcoxon matched-pairs signed-ranks test. Results: Denture bases fabricated by the injection molding (3740 g) and anchorized techniques (2913 g) recorded greater retention values than the conventional technique (2468 g). A significant difference was seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique, and the least retention was seen with the conventional molding technique. PMID:27382542
Measurement of absolute lung volumes by imaging techniques.
Clausen, J
1997-10-01
In this paper, the techniques available for estimating total lung capacities from standard chest radiographs in children and infants as well as adults are reviewed. These techniques include manual measurements using ellipsoid and planimetry techniques as well as computerized systems. Techniques are also available for making radiographic lung volume measurements from portable chest radiographs. There are inadequate data in the literature to support recommending one specific technique over another. Though measurements of lung volumes by radiographic, plethysmographic, gas dilution or washout techniques result in remarkably similar mean results when groups of normal subjects are tested, in patients with disease, the results of these different basic measurement techniques can differ significantly. Computed tomographic and magnetic resonance techniques can also be used to measure absolute lung volumes and offer the theoretical advantages that the results in individual subjects are less affected by variances of thoracic shape than are measurements made using conventional chest radiographs.
NASA Astrophysics Data System (ADS)
Rama Subbanna, S.; Suryakalavathi, M., Dr.
2017-08-01
This paper presents a performance analysis of different control techniques for a spike reduction method applied to a medium-frequency transformer based DC spot welding system. Spike reduction is an important factor to be considered in spot welding systems. During normal RSWS operation, the welding transformer's magnetic core can become saturated due to the unbalanced resistances of the two transformer secondary windings and the different characteristics of the output rectifier diodes, which causes current spikes and over-current protection switch-off of the entire system. The current control technique is a piecewise linear control technique inspired by DC-DC converter control algorithms, yielding a novel spike reduction method for MFDC spot welding applications. The two controllers used for the spike reduction portion of the overall application are the traditional PI controller and an optimized PI controller. Care is taken that the current control technique maintains reduced spikes in the primary current of the transformer while it reduces the total harmonic distortion (THD). The performance parameters involved in the spike reduction technique are the THD and the percentage of current spike reduction for both techniques. Matlab/Simulink based simulation is carried out for the MFDC RSWS with KW, and results are tabulated for the PI and optimized PI controllers, and a trade-off analysis is carried out.
[Balloon osteoplasty as reduction technique in the treatment of tibial head fractures].
Freude, T; Kraus, T M; Sandmann, G H
2015-10-01
Tibial plateau fractures requiring surgery are severe injuries of the lower extremities. Depending on the fracture pattern, the age of the patient, the range of activity and the bone quality, there is a broad variation in adequate treatment. This article reports on an innovative treatment concept to address split depression fractures (Schatzker type II) and depression fractures (Schatzker type III) of the tibial head using the balloon osteoplasty technique for fracture reduction, and illustrates the surgical procedure. Using the balloon technique, a precise and safe fracture reduction can be achieved. This internal osteoplasty combines a minimally invasive percutaneous approach with a gentle raising of the depressed area and the associated protection of the regenerative layer (stratum regenerativum) below the articular cartilage surface. Fracture reduction by use of a tamper results in high peak forces over small areas, whereas with the balloon the forces are distributed over a larger area, causing less secondary stress to the cartilage tissue. This less invasive approach might help to achieve a better long-term outcome with decreased secondary osteoarthritis owing to the precise and chondroprotective reduction technique.
Sisniega, A.; Zbijewski, W.; Badal, A.; Kyprianou, I. S.; Stayman, J. W.; Vaquero, J. J.; Siewerdsen, J. H.
2013-01-01
Purpose: The proliferation of cone-beam CT (CBCT) has created interest in performance optimization, with x-ray scatter identified among the main limitations to image quality. CBCT often contends with elevated scatter, but the wide variety of imaging geometry in different CBCT configurations suggests that not all configurations are affected to the same extent. Graphics processing unit (GPU) accelerated Monte Carlo (MC) simulations are employed over a range of imaging geometries to elucidate the factors governing scatter characteristics, efficacy of antiscatter grids, guide system design, and augment development of scatter correction. Methods: A MC x-ray simulator implemented on GPU was accelerated by inclusion of variance reduction techniques (interaction splitting, forced scattering, and forced detection) and extended to include x-ray spectra and analytical models of antiscatter grids and flat-panel detectors. The simulator was applied to small animal (SA), musculoskeletal (MSK) extremity, otolaryngology (Head), breast, interventional C-arm, and on-board (kilovoltage) linear accelerator (Linac) imaging, with an axis-to-detector distance (ADD) of 5, 12, 22, 32, 60, and 50 cm, respectively. Each configuration was modeled with and without an antiscatter grid and with (i) an elliptical cylinder varying 70–280 mm in major axis; and (ii) digital murine and anthropomorphic models. The effects of scatter were evaluated in terms of the angular distribution of scatter incident upon the detector, scatter-to-primary ratio (SPR), artifact magnitude, contrast, contrast-to-noise ratio (CNR), and visual assessment. Results: Variance reduction yielded improvements in MC simulation efficiency ranging from ∼17-fold (for SA CBCT) to ∼35-fold (for Head and C-arm), with the most significant acceleration due to interaction splitting (∼6 to ∼10-fold increase in efficiency). The benefit of a more extended geometry was evident by virtue of a larger air gap—e.g., for a 16 cm diameter object, the SPR reduced from 1.5 for ADD = 12 cm (MSK geometry) to 1.1 for ADD = 22 cm (Head) and to 0.5 for ADD = 60 cm (C-arm). Grid efficiency was higher for configurations with shorter air gap due to a broader angular distribution of scattered photons—e.g., scatter rejection factor ∼0.8 for MSK geometry versus ∼0.65 for C-arm. Grids reduced cupping for all configurations but had limited improvement on scatter-induced streaks and resulted in a loss of CNR for the SA, Breast, and C-arm. Relative contribution of forward-directed scatter increased with a grid (e.g., Rayleigh scatter fraction increasing from ∼0.15 without a grid to ∼0.25 with a grid for the MSK configuration), resulting in scatter distributions with greater spatial variation (the form of which depended on grid orientation). Conclusions: A fast MC simulator combining GPU acceleration with variance reduction provided a systematic examination of a range of CBCT configurations in relation to scatter, highlighting the magnitude and spatial uniformity of individual scatter components, illustrating tradeoffs in CNR and artifacts and identifying the system geometries for which grids are more beneficial (e.g., MSK) from those in which an extended geometry is the better defense (e.g., C-arm head imaging). Compact geometries with an antiscatter grid challenge assumptions of slowly varying scatter distributions due to increased contribution of Rayleigh scatter. PMID:23635285
Unsupervised classification of remote multispectral sensing data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The new unsupervised classification technique for classifying multispectral remote sensing data, which can come either from the multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering, which is a one-pass sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps by the unsupervised technique and the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
The Outlier Detection for Ordinal Data Using Scaling Technique of Regression Coefficients
NASA Astrophysics Data System (ADS)
Adnan, Arisman; Sugiarto, Sigit
2017-06-01
The aim of this study is to detect outliers by using the coefficients of Ordinal Logistic Regression (OLR) for the case of k-category responses, where scores range from 1 (the best) to 8 (the worst). We detect them by using the sum of moduli of the ordinal regression coefficients calculated by the jackknife technique. This technique is improved by scaling the regression coefficients to their means. The R language has been used on a set of ordinal data from a reference distribution. Furthermore, we compare this approach with studentised residual plots of the jackknife technique for ANOVA (Analysis of Variance) and OLR. This study shows that the jackknife technique, along with proper scaling, may reveal outliers in ordinal regression reasonably well.
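As a rough illustration of the jackknife idea sketched in this abstract, the fragment below refits a model with each observation left out and sums the moduli of the coefficients scaled to their jackknife means. It is only a loose sketch: a plain binary LogisticRegression from scikit-learn stands in for the ordinal logistic model, and the data and flagging threshold are invented for illustration.

```python
# Sketch of jackknife-based outlier screening via regression coefficients.
# Assumptions: a plain (binary) LogisticRegression stands in for the ordinal
# logistic model used in the paper; X, y are a toy data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = (X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.5, size=60) > 0).astype(int)

n = len(y)
coefs = np.empty((n, X.shape[1]))
for i in range(n):                      # leave observation i out and refit
    keep = np.arange(n) != i
    model = LogisticRegression().fit(X[keep], y[keep])
    coefs[i] = model.coef_.ravel()

scaled = coefs / coefs.mean(axis=0)     # scale each coefficient to its jackknife mean
stat = np.abs(scaled).sum(axis=1)       # sum of moduli per left-out observation
threshold = stat.mean() + 3 * stat.std()   # arbitrary illustrative cutoff
print("candidate outliers:", np.where(stat > threshold)[0])
```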
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
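The covariance-inflation idea in this abstract can be illustrated with a minimal one-dimensional Kalman filter in which the process-noise covariance Q is temporarily enlarged after a sensor swap so that recent measurements carry more weight. All numerical values, the scalar state, and the length of the interim window are illustrative assumptions, not the patented spacecraft implementation.

```python
# Minimal 1-D Kalman filter illustrating a temporary process-noise inflation.
# All numbers (F, H, Q, R, inflation window) are illustrative assumptions.
import numpy as np

def kalman_step(x, P, z, Q, R, F=1.0, H=1.0):
    # predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # update
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
truth, x, P = 0.0, 5.0, 1.0              # filter starts with a large attitude-like error
Q_oper, Q_interim, R = 1e-4, 1e-1, 0.25
for k in range(50):
    z = truth + rng.normal(scale=np.sqrt(R))
    Q = Q_interim if k < 10 else Q_oper  # interim (inflated) Q right after the swap
    x, P = kalman_step(x, P, z, Q, R)
print(f"estimate after transient: {x:.3f}")
```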
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve of C-peptide following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
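The roughly 50% reduction in target sample size quoted above follows from the usual two-sample size formula once the residual variance is deflated by the covariate R-squared; the sketch below shows that scaling with illustrative effect-size, power, and R-squared values that are assumptions rather than the trial's actual planning parameters.

```python
# Sketch of how adjusting for baseline covariates shrinks a two-arm sample
# size: n per arm scales with the residual variance sigma^2 * (1 - R^2).
# Effect size, power, and R^2 here are illustrative assumptions.
from scipy.stats import norm

alpha, power = 0.05, 0.90
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
sigma, delta, r2 = 1.0, 0.4, 0.45           # SD, detectable difference, covariate R^2

n_unadj = 2 * (z * sigma / delta) ** 2
n_adj = n_unadj * (1 - r2)                  # ANCOVA-style variance reduction
print(f"unadjusted n/arm: {n_unadj:.0f}, adjusted n/arm: {n_adj:.0f}")
```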
Arjunan, Sridhar P; Kumar, Dinesh K; Bastos, Teodiano
2012-01-01
This study investigated the effect of age on the fractal-based complexity measure of muscle activity and on the variance in the force of isometric muscle contraction. Surface electromyogram (sEMG) and force of muscle contraction were recorded from 40 healthy subjects categorized into two groups: Group 1 (Young, age range 20-30; 10 males and 10 females) and Group 2 (Old, age range 55-70; 10 males and 10 females), during isometric exercise at maximum voluntary contraction (MVC). The results show that there is a reduction in the complexity of the surface electromyogram (sEMG) associated with aging. The results demonstrate that there is an increase in the coefficient of variance (CoV) of the force of muscle contraction and a decrease in complexity of sEMG for the Old age group when compared with the Young age group.
Bidra, Avinash S
2015-06-01
Bone reduction for maxillary fixed implant-supported prosthodontic treatment is often necessary to either gain prosthetic space or to conceal the prosthesis-tissue junction in patients with excessive gingival display (gummy smile). Inadequate bone reduction is often a cause of prosthetic failure due to material fractures, poor esthetics, or inability to perform oral hygiene procedures due to unfavorable ridge lap prosthetic contours. Various instruments and techniques are available for bone reduction. It would be helpful to have an accurate and efficient method for bone reduction at the time of surgery and subsequently create a smooth bony platform. This article presents a straightforward technique for systematic bone reduction by transferring the patient's maximum smile line, recorded clinically, to a clear radiographic smile guide for treatment planning using cone beam computed tomography (CBCT). The patient's smile line and the amount of required bone reduction are transferred clinically by marking bone with a sterile stationery graphite wood pencil at the time of surgery. This technique can help clinicians to accurately achieve the desired bone reduction during surgery, and provide confidence that the diagnostic and treatment planning goals have been achieved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706
CCFE performs Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
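The 'long history' problem described here arises when a particle arrives far above the weight-window bounds and is split into a very large number of daughters. The fragment below is only a schematic of that mechanism, capping the number of splits with an arbitrary limit; it is not the CCFE adaptation of MCNP, whose actual adjustment logic is not given in the abstract.

```python
# Sketch of capping the number of splits when a particle arrives far above the
# weight-window upper bound, so no single history becomes intractably long.
# The window bound and the cap are illustrative assumptions, not MCNP's logic.
def split_particle(weight, ww_upper, max_splits=20):
    """Return the list of daughter weights produced by splitting."""
    n_split = int(weight / ww_upper)
    if n_split <= 1:
        return [weight]
    n_split = min(n_split, max_splits)   # 'de-optimise' the window: cap the splitting
    return [weight / n_split] * n_split

print(split_particle(weight=500.0, ww_upper=1.0))   # capped at 20 daughters
```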
Three-dimensional Monte Carlo calculation of atmospheric thermal heating rates
NASA Astrophysics Data System (ADS)
Klinger, Carolin; Mayer, Bernhard
2014-09-01
We present a fast Monte Carlo method for thermal heating and cooling rates in three-dimensional atmospheres. These heating/cooling rates are relevant particularly in broken cloud fields. We compare forward and backward photon tracing methods and present new variance reduction methods to speed up the calculations. For this application it turns out that backward tracing is in most cases superior to forward tracing. Since heating rates may be either calculated as the difference between emitted and absorbed power per volume or alternatively from the divergence of the net flux, both approaches have been tested. We found that the absorption/emission method is superior (with respect to computational time for a given uncertainty) if the optical thickness of the grid box under consideration is smaller than about 5 while the net flux divergence may be considerably faster for larger optical thickness. In particular, we describe the following three backward tracing methods: the first and most simple method (EMABS) is based on a random emission of photons in the grid box of interest and a simple backward tracing. Since only those photons which cross the grid box boundaries contribute to the heating rate, this approach behaves poorly for large optical thicknesses which are common in the thermal spectral range. For this reason, the second method (EMABS_OPT) uses a variance reduction technique to improve the distribution of the photons in a way that more photons are started close to the grid box edges and thus contribute to the result which reduces the uncertainty. The third method (DENET) uses the flux divergence approach where - in backward Monte Carlo - all photons contribute to the result, but in particular for small optical thickness the noise becomes large. The three methods have been implemented in MYSTIC (Monte Carlo code for the phYSically correct Tracing of photons In Cloudy atmospheres). All methods are shown to agree within the photon noise with each other and with a discrete ordinate code for a one-dimensional case. Finally a hybrid method is built using a combination of EMABS_OPT and DENET, and application examples are shown. It should be noted that for this application, only little improvement is gained by EMABS_OPT compared to EMABS.
Correlates of Injury-forced Work Reduction for Massage Therapists and Bodywork Practitioners.
Blau, Gary; Monos, Christopher; Boyer, Ed; Davis, Kathleen; Flanagan, Richard; Lopez, Andrea; Tatum, Donna S
2013-01-01
Injury-forced work reduction (IFWR) has been acknowledged as an all-too-common occurrence for massage therapists and bodywork practitioners (M & Bs). However, little prior research has specifically investigated demographic, work attitude, and perceptual correlates of IFWR among M & Bs. The objective was to test two hypotheses, H1 and H2. H1 is that the accumulated cost variables set (e.g., accumulated costs, continuing education costs) will account for a significant amount of IFWR variance beyond the control/demographic (e.g., social desirability response bias, gender, years in practice, highest education level) and work attitude/perception variables (e.g., job satisfaction, affective occupation commitment, occupation identification, limited occupation alternatives) sets. H2 is that the two exhaustion variables (i.e., physical exhaustion, work exhaustion) set will account for significant IFWR variance beyond the control/demographic, work attitude/perception, and accumulated cost variables sets. An online survey sample of 2,079 complete-data M & Bs was collected. Stepwise regression analysis was used to test the study hypotheses. The research design first controlled for the control/demographic (Step 1) and work attitude/perception variables sets (Step 2), before testing the successive incremental impact of two variable sets, accumulated costs (Step 3) and exhaustion variables (Step 4), for explaining IFWR. Results supported both study hypotheses: the accumulated cost variables set (H1) and the exhaustion variables set (H2) each significantly explained IFWR after the control/demographic and work attitude/perception variables sets. The most important correlate for explaining IFWR was higher physical exhaustion, but work exhaustion was also significant. It is not just physical "wear and tear", but also "mental fatigue", that can lead to IFWR for M & Bs. Being female, having more years in practice, and having higher continuing education costs were also significant correlates of IFWR. Lower overall levels of work exhaustion, physical exhaustion, and IFWR were found in the present sample. However, since both types of exhaustion significantly and positively impact IFWR, taking sufficient time between massages and, if possible, varying one's massage technique to replenish one's physical and mental energy seem important. Failure to take required continuing education units, due to high costs, also increases risk for IFWR. Study limitations and future research issues are discussed.
Lee, Jounghee; Park, Sohyun
2015-01-01
Objectives The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. Methods We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We implemented Chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. Results This study involved 104 individuals employed at the worksite cafeterias, comprised of 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium reduction practices were higher, but perceived barriers appeared to be lower in participants with high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted to have more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were use of processed foods and limited methods of sodium-reduced cooking in worksite cafeterias. Conclusion To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods. PMID:27169011
Michael Hoppus; Stan Arner; Andrew Lister
2001-01-01
A reduction in variance for estimates of forest area and volume in the state of Connecticut was accomplished by stratifying FIA ground plots using raw, transformed and classified Landsat Thematic Mapper (TM) imagery. A US Geological Survey (USGS) Multi-Resolution Landscape Characterization (MRLC) vegetation cover map for Connecticut was used to produce a forest/non-...
Deconstructing Demand: The Anthropogenic and Climatic Drivers of Urban Water Consumption.
Hemati, Azadeh; Rippy, Megan A; Grant, Stanley B; Davis, Kristen; Feldman, David
2016-12-06
Cities in drought prone regions of the world such as South East Australia are faced with escalating water scarcity and security challenges. Here we use 72 years of urban water consumption data from Melbourne, Australia, a city that recently overcame a 12 year "Millennium Drought", to evaluate (1) the relative importance of climatic and anthropogenic drivers of urban water demand (using wavelet-based approaches) and (2) the relative contribution of various water saving strategies to demand reduction during the Millennium Drought. Our analysis points to conservation as a dominant driver of urban water savings (69%), followed by nonrevenue water reduction (e.g., reduced meter error and leaks in the potable distribution system; 29%), and potable substitution with alternative sources like rain or recycled water (3%). Per-capita consumption exhibited both climatic and anthropogenic signatures, with rainfall and temperature explaining approximately 55% of the variance. Anthropogenic controls were also strong (up to 45% variance explained). These controls were nonstationary and frequency-specific, with conservation measures like outdoor water restrictions impacting seasonal water use and technological innovation/changing social norms impacting lower frequency (baseline) use. The above-noted nonstationarity implies that wavelets, which do not assume stationarity, show promise for use in future predictive models of demand.
NASA Astrophysics Data System (ADS)
Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.
2012-11-01
Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies in order to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, with the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model has been obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model which accounts for both types of observations, we performed a sequential inversion of seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.
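The linear velocity-density relationship (Birch's law) used to seed the 3D gravity model can be sketched as follows; the coefficients are generic illustrative values, not those fitted by the authors.

```python
# Convert a P-wave velocity model (km/s) to density (g/cm^3) with a linear
# Birch-type relation rho = a + b * Vp.  Coefficients a and b are illustrative
# assumptions; the paper fits its own velocity-density relationship.
import numpy as np

def birch_density(vp_kms, a=0.77, b=0.32):
    return a + b * np.asarray(vp_kms)

vp_model = np.array([4.5, 5.8, 6.4, 7.1])    # crustal velocities along a profile
print(birch_density(vp_model))                # initial densities for gravity modelling
```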
Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.
Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John
2008-02-01
A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
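The propensity-score weighting compared against double sampling works by modelling the probability of response from covariates and weighting respondents by the inverse of that probability. A minimal sketch on simulated data, assuming a logistic response model, is given below.

```python
# Sketch of nonresponse adjustment by inverse-propensity weighting.
# X holds covariates, r is the response indicator, and y is an outcome observed
# for everyone here only so the "truth" can be printed; all data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 3))
p_response = 1.0 / (1.0 + np.exp(-(0.2 + X @ np.array([0.8, -0.4, 0.3]))))
r = rng.random(n) < p_response
y = (rng.random(n) < 0.3 + 0.1 * X[:, 0]).astype(float)   # "chronic disease" indicator

phat = LogisticRegression().fit(X, r).predict_proba(X)[:, 1]   # propensity of responding
w = 1.0 / phat[r]                                              # weights for responders only
print("naive responder mean :", y[r].mean().round(3))
print("weighted estimate    :", np.average(y[r], weights=w).round(3))
print("full-population truth:", y.mean().round(3))
```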
Using negative emotional feedback to modify risky behavior of young moped riders.
Megías, Alberto; Cortes, Abilio; Maldonado, Antonio; Cándido, Antonio
2017-05-19
The aim of this research was to investigate whether the use of messages with negative emotional content is effective in promoting safe behavior of moped riders and how exactly these messages modulate rider behavior. Participants received negative feedback when performing risky behaviors using a computer task. The effectiveness of this treatment was subsequently tested in a riding simulator. The results demonstrated how riders receiving negative feedback had a lower number of traffic accidents than a control group. The reduction in accidents was accompanied by a set of changes in the riding behavior. We observed a lower average speed and greater respect for speed limits. Furthermore, analysis of the steering wheel variance, throttle variance, and average braking force provided evidence for a more even and homogenous riding style. This greater abidance of traffic regulations and friendlier riding style could explain some of the causes behind the reduction in accidents. The use of negative emotional feedback in driving schools or advanced rider assistance systems could enhance riding performance, making riders aware of unsafe practices and helping them to establish more accurate riding habits. Moreover, the combination of riding simulators and feedback-for example, in the training of novice riders and traffic offenders-could be an efficient tool to improve their hazard perception skills and promote safer behaviors.
Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Ferris, A. T. Judy
1999-01-01
This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
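A minimal sketch of the approach described above, a MOVE.1-type maintenance-of-variance extension applied in Box-Cox transformed space, is given below. The flow records are simulated, the overlap period is arbitrary, and the choice of the MOVE.1 variant is an assumption for illustration.

```python
# Sketch of MOVE.1 record extension (maintenance of variance) applied in
# Box-Cox transformed space.  Data, overlap period, and lambdas are illustrative.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)
q_long = rng.lognormal(mean=3.0, sigma=0.8, size=400)               # long-record gage
q_short = np.exp(0.9 * np.log(q_long) + rng.normal(0, 0.3, 400))    # short-record gage
obs = slice(0, 150)                                                 # overlap period

x, lam_x = boxcox(q_long)                   # scaled power transforms of each record
y_obs, lam_y = boxcox(q_short[obs])
xo = x[obs]

# MOVE.1 line: slope is the ratio of standard deviations, not the OLS slope.
slope = np.sign(np.corrcoef(xo, y_obs)[0, 1]) * y_obs.std() / xo.std()
y_ext = y_obs.mean() + slope * (x[150:] - xo.mean())
q_ext = inv_boxcox(y_ext, lam_y)            # back to flow units
print("std of log flow, observed vs extended:",
      round(np.log(q_short[obs]).std(), 2), round(np.log(q_ext).std(), 2))
```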
NASA Astrophysics Data System (ADS)
Rock, N. M. S.; Duffy, T. R.
REGRES allows a range of regression equations to be calculated for paired sets of data values in which both variables are subject to error (i.e. neither is the "independent" variable). Nonparametric regressions, based on medians of all possible pairwise slopes and intercepts, are treated in detail. Estimated slopes and intercepts are output, along with confidence limits, Spearman and Kendall rank correlation coefficients. Outliers can be rejected with user-determined stringency. Parametric regressions can be calculated for any value of λ (the ratio of the variances of the random errors for y and x)—including: (1) major axis (λ = 1); (2) reduced major axis (λ = variance of y/variance of x); (3) Y on X (λ = infinity); or (4) X on Y (λ = 0) solutions. Pearson linear correlation coefficients also are output. REGRES provides an alternative to conventional isochron assessment techniques where bivariate normal errors cannot be assumed, or weighting methods are inappropriate.
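The nonparametric regression that REGRES implements, the median of all pairwise slopes with a median-based intercept (often called the Theil-Sen estimator), can be sketched in a few lines; the data and the outlier are invented for illustration, and no confidence limits or outlier rejection are included.

```python
# Nonparametric regression from the median of all pairwise slopes
# (the Theil-Sen idea); toy data with one gross outlier for illustration.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)
y[5] += 15.0                                   # a gross outlier

slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i, j in combinations(range(x.size), 2) if x[j] != x[i]]
slope = np.median(slopes)
intercept = np.median(y - slope * x)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")   # robust to the outlier
```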
How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?
Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J
2016-04-05
Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
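The dimensionality argument above rests on how quickly the cumulative explained variance of a PCA on the indicator-by-product matrix saturates. The sketch below reproduces only that step on simulated data with four latent drivers; the real analysis uses 135 indicators for 976 ecoinvent products and an additional optimization to pick a named indicator set.

```python
# Sketch: how many principal components are needed to cover 92% of the
# variance across correlated impact indicators.  The indicator matrix here is
# simulated with four underlying "drivers" standing in for the ecoinvent scores.
import numpy as np

rng = np.random.default_rng(5)
latent = rng.normal(size=(976, 4))
loadings = rng.normal(size=(4, 135))
scores = latent @ loadings + 0.3 * rng.normal(size=(976, 135))

Z = (scores - scores.mean(0)) / scores.std(0)          # standardize indicators
eigvals = np.linalg.svd(Z, compute_uv=False) ** 2
explained = np.cumsum(eigvals) / eigvals.sum()
print("components for 92% variance:", int(np.searchsorted(explained, 0.92) + 1))
```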
Variance of transionospheric VLF wave power absorption
NASA Astrophysics Data System (ADS)
Tao, X.; Bortnik, J.; Friedrich, M.
2010-07-01
To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method using the standard ionospheric model IRI and in situ observational data. We first verify the classic absorption curves of Helliwell's using our full wave code. Then we show that the IRI model gives overall smaller wave absorption compared with Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that are more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet time wave absorption is smaller than that of Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.
Associations of gender inequality with child malnutrition and mortality across 96 countries.
Marphatia, A A; Cole, T J; Grijalva-Eternod, C; Wells, J C K
2016-01-01
National efforts to reduce low birth weight (LBW) and child malnutrition and mortality prioritise economic growth. However, this may be ineffective, while rising gross domestic product (GDP) also imposes health costs, such as obesity and non-communicable disease. There is a need to identify other potential routes for improving child health. We investigated associations of the Gender Inequality Index (GII), a national marker of women's disadvantages in reproductive health, empowerment and labour market participation, with the prevalence of LBW, child malnutrition (stunting and wasting) and mortality under 5 years in 96 countries, adjusting for national GDP. The GII displaced GDP as a predictor of LBW, explaining 36% of the variance. Independent of GDP, the GII explained 10% of the variance in wasting and stunting and 41% of the variance in child mortality. Simulations indicated that reducing GII could lead to major reductions in LBW, child malnutrition and mortality in low- and middle-income countries. Independent of national wealth, reducing women's disempowerment relative to men may reduce LBW and promote child nutritional status and survival. Longitudinal studies are now needed to evaluate the impact of efforts to reduce societal gender inequality.
Retrospective analysis of a detector fault for a full field digital mammography system
NASA Astrophysics Data System (ADS)
Marshall, N. W.
2006-11-01
This paper describes objective and subjective image quality measurements acquired as part of a routine quality assurance (QA) programme for an amorphous selenium (a-Se) full field digital mammography (FFDM) system between August-04 and February-05. During this period, the FFDM detector developed a fault and was replaced. A retrospective analysis of objective image quality parameters (modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE)) is presented to try to gain a deeper understanding of the detector problem that occurred. These measurements are discussed in conjunction with routine contrast-detail (c-d) results acquired with the CDMAM (Artinis, The Netherlands) test object. There was significant reduction in MTF over this period of time, indicating an increase in blurring occurring within the a-Se converter layer. This blurring was not isotropic, being greater in the data line direction (left to right across the detector) than in the gate line direction (chest wall to nipple). The initial value of the 50% MTF point was 6 mm⁻¹; for the faulty detector the 50% MTF points occurred at 3.4 mm⁻¹ and 1.0 mm⁻¹ in the gate line and data line directions, respectively. Prior to NNPS estimation, variance images were formed of the detector flat field images. The spatial distribution of variance was not uniform, suggesting that the physical blurring process was not constant across the detector. This change in variance with image position implied that the stationarity of the noise statistics within the image was limited and that care would be needed when performing objective measurements. The NNPS measurements confirmed the results found for the MTF, with a strong reduction in NNPS as a function of spatial frequency. This reduction was far more severe in the data line direction. A somewhat tentative DQE estimate was made; in the gate line direction there was little change in DQE up to 2.5 mm⁻¹, but at the Nyquist frequency the DQE had fallen to approximately 35% of the original value. There was severe attenuation of DQE in the data line direction, the DQE falling to less than 0.01 above approximately 3.0 mm⁻¹. C-d results showed an increase in threshold contrast of approximately 25% for details less than 0.2 mm in diameter, while no reduction in c-d performance was found at the largest detail diameters (1.0 mm and above). Despite the detector fault, the c-d curve was found to pass the European protocol acceptable c-d curve.
Validating Variance Similarity Functions in the Entrainment Zone
NASA Astrophysics Data System (ADS)
Osman, M.; Turner, D. D.; Heus, T.; Newsom, R. K.
2017-12-01
In previous work, the water vapor variance in the entrainment zone was proposed to be proportional to the convective velocity scale, the gradient of the water vapor mixing ratio, and the Brunt-Vaisala frequency in the interfacial layer, while the variance of the vertical wind in the entrainment zone was defined in terms of the convective velocity scale. The variances in the entrainment zone have been hypothesized to depend on two distinct functions, which also depend on the Richardson number. To the best of our knowledge, these hypotheses have never been tested observationally. Simultaneous measurements of the eddy-correlation surface flux, wind shear profiles from wind profilers, and variance profiles of vertical motion and water vapor from Doppler and Raman lidars, respectively, provide a unique opportunity to thoroughly examine the functions used in defining the variances and validate them. These observations were made over the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site. We have identified about 30 cases from 2016 during which the convective boundary layer (CBL) is quasi-stationary and well mixed for at least 2 hours. The vertical profiles of turbulent fluctuations of the vertical wind and water vapor have been derived by applying an autocovariance technique to a set of 2-h time series in order to separate out the instrument random error. The error analysis of the lidar observations demonstrates that the lidars are capable of resolving the vertical structure of turbulence around the entrainment zone. Therefore, utilizing this unique combination of observations, this study focuses on extensively testing the hypotheses that the second-order moments are indeed proportional to the functions, which also depend on the Richardson number. The coefficients used in defining the functions will also be determined observationally and compared with the values suggested by large eddy simulation (LES) studies.
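The autocovariance technique mentioned above is commonly implemented by extrapolating the first nonzero lags of the autocovariance function back to lag 0, since uncorrelated instrument noise contributes only at lag 0. The sketch below assumes that particular (Lenschow-type) variant with a linear fit over lags 1-3, which may differ from the authors' exact procedure; the time series is simulated.

```python
# Sketch of the autocovariance technique for removing uncorrelated instrument
# noise from a measured variance: extrapolate the first few nonzero lags of the
# autocovariance function back to lag 0.  Data are simulated; the linear
# extrapolation over lags 1-3 is an illustrative choice.
import numpy as np

rng = np.random.default_rng(6)
n = 4000
atmos = np.convolve(rng.normal(size=n + 50), np.ones(50) / 50, mode="valid")
signal = atmos / atmos.std()                                   # unit atmospheric variance
measured = signal + rng.normal(scale=0.7, size=signal.size)    # add white instrument noise

def autocov(x, lag):
    x = x - x.mean()
    return np.mean(x[:x.size - lag] * x[lag:]) if lag else np.mean(x * x)

lags = np.array([1, 2, 3])
acov = np.array([autocov(measured, k) for k in lags])
fit = np.polyfit(lags, acov, 1)                    # linear extrapolation to lag 0
print("measured variance  :", round(autocov(measured, 0), 2))   # signal + noise
print("noise-free estimate:", round(np.polyval(fit, 0), 2))     # close to 1.0
```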
Effective Analysis of Reaction Time Data
ERIC Educational Resources Information Center
Whelan, Robert
2008-01-01
Most analyses of reaction time (RT) data are conducted by using the statistical techniques with which psychologists are most familiar, such as analysis of variance on the sample mean. Unfortunately, these methods are usually inappropriate for RT data, because they have little power to detect genuine differences in RT between conditions. In…
Spectral mixture modeling: Further analysis of rock and soil types at the Viking Lander sites
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.
1987-01-01
A new image processing technique was applied to Viking Lander multispectral images. Spectral endmembers were defined that included soil, rock and shade. Mixtures of these endmembers were found to account for nearly all the spectral variance in a Viking Lander image.
Group Matching: Is This a Research Technique to Be Avoided?
ERIC Educational Resources Information Center
Ross, Donald C.; Klein, Donald F.
1988-01-01
The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)
New Statistical Techniques for Evaluating Longitudinal Models.
ERIC Educational Resources Information Center
Murray, James R.; Wiley, David E.
A basic methodological approach in developmental studies is the collection of longitudinal data. Behavioral data can take at least two forms, qualitative (or discrete) and quantitative. Both types are fallible. Measurement errors can occur in quantitative data, and measures of these are based on error variance. Qualitative or discrete data can…
Longitudinal Factor Score Estimation Using the Kalman Filter.
ERIC Educational Resources Information Center
Oud, Johan H.; And Others
1990-01-01
How longitudinal factor score estimation--the estimation of the evolution of factor scores for individual examinees over time--can profit from the Kalman filter technique is described. The Kalman estimates change more cautiously over time, have lower estimation error variances, and reproduce the LISREL program latent state correlations more…
USDA-ARS?s Scientific Manuscript database
Soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly with respect to their signal, noise, and/or combined time-series variability. Minimizing differences in signal variances is particularly important in data assimilation techniques to optimize the accuracy of the analysis obt...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Identification of Best Technology, Treatment Techniques or Other Means... community water systems and non-transient, non-community water systems to install and/or use any treatment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements of part 141, subpart H-Filtration and Disinfection. 142.64 Section 142.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Identification of Best Technology, Treatment Techniques or Other Means...
Survey of Munitions Response Technologies
2006-06-01
[Extraction residue from the report's table of contents: Digital Data Processing; Source Data and Methods; DGM versus Mag and Flag Processes.] Signatures, surface clutter, variances in operator technique, target selection, and data processing all degrade and affect optimum performance.
40 CFR 142.60 - Variances from the maximum contaminant level for total trihalomethanes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... level for total trihalomethanes. 142.60 Section 142.60 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Identification of Best Technology, Treatment Techniques or Other Means Generally Available § 142..., pursuant to section 1415(a)(1)(A) of the Act, hereby identifies the following as the best technology...
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
[Essential hypertension and stress. When do yoga, psychotherapy and autogenic training help?].
Herrmann, J M
2002-05-09
Psychosocial factors play an important role in the development and course of essential hypertension, although "stress" can account for only 10% of blood pressure variance. A variety of psychotherapeutic interventions, such as relaxation techniques (autogenic training or progressive muscular relaxation), behavioral therapy or biofeedback techniques, can lower elevated blood pressure by an average of 10 mmHg (systolic) and 5 mmHg (diastolic). As a "secondary effect", such measures may also prompt the hypertensive to adopt a more health-conscious lifestyle.
Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map
NASA Astrophysics Data System (ADS)
Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.
2016-03-01
Multipath is one major error source in high-accuracy GNSS positioning. Various hardware and software approaches have been developed to mitigate the multipath effect. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches exploit the spatiotemporal repeatability of the multipath effect in a static environment, hence they can be implemented to generate a multipath correction model for real-time GNSS data processing. We focus on the spatiotemporal repeatability-based MHM and SF/ASF approaches and compare their performances for multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, with a residual variance reduction of about 50% over the short term (the next 5 days) that remains at roughly 45% over a longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is medium to low frequency.
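A minimal sketch of the MHM idea, averaging residuals into azimuth/elevation bins and subtracting the binned value when the same satellite geometry repeats, is given below; the 5-degree grid, the simulated multipath signal, and applying the map to the data it was built from are all illustrative simplifications.

```python
# Sketch of a multipath hemispherical map (MHM): average carrier-phase
# residuals into azimuth/elevation bins, then subtract the binned value as a
# correction when the geometry repeats.  Bin size and data are illustrative.
import numpy as np

rng = np.random.default_rng(7)
az = rng.uniform(0, 360, 20000)              # satellite azimuth (deg)
el = rng.uniform(5, 90, 20000)               # satellite elevation (deg)
multipath = 0.01 * np.sin(np.radians(3 * az)) * np.cos(np.radians(el))
resid = multipath + 0.003 * rng.normal(size=az.size)   # residuals, metres

az_bin = (az // 5).astype(int)               # 5 deg x 5 deg hemispherical grid
el_bin = (el // 5).astype(int)
mhm = np.zeros((72, 18))
counts = np.zeros((72, 18))
np.add.at(mhm, (az_bin, el_bin), resid)
np.add.at(counts, (az_bin, el_bin), 1)
mhm /= np.maximum(counts, 1)

corrected = resid - mhm[az_bin, el_bin]      # apply the map (same geometry repeats)
print("variance reduction: %.0f%%" % (100 * (1 - corrected.var() / resid.var())))
```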
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
Cryopreservation of Fish Sperm
NASA Astrophysics Data System (ADS)
Kurokura, Hisashi
The present status of research activities on the cryopreservation of fish gametes in the aquaculture field is introduced. More than 59 fish species have been reported in the research literature, and nearly half of them were studied during the most recent 10 years. This means that research activity is increasing, though commercial profit has not yet been obtained. The number of fish species whose sperm can be successfully cryopreserved is still limited compared with the numerous teleost species. One of the major obstacles to improvement of the technique is the existence of wide species-specific variance in the freezing tolerance of fish sperm. The variance can possibly be explained through information obtained from studies in comparative spermatology, a recently active field in fish biology.
Hallberg, L R; Johnsson, T; Axelsson, A
1993-01-01
By using a modified stepwise regression analysis technique, the structure of self-perceived handicap and tinnitus annoyance in 89 males with noise-induced hearing loss was described. Handicap was related to three clusters of variables, reflecting individual, environmental, and socioeconomic aspects, and 60% of the variance in self-perceived handicap was explained by the representatives of these clusters: i.e. 'acceptance of hearing problems', 'social support related to tinnitus' and 'years of education'. Tinnitus had no impact of its own on self-perceived handicap and only a modest portion (36%) of the variance in tinnitus annoyance was explained by 'sleep disturbance' and 'auditory perceptual difficulties'.
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Errors in radial velocity variance from Doppler wind lidar
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...
2016-08-29
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
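One simple way to rank influential inputs from a Monte Carlo run, consistent with the spirit of this abstract though not necessarily the authors' exact method, is the squared Spearman rank correlation between each sampled input and the model outcome; a toy three-input model is used below.

```python
# Sketch of screening influential inputs in a Monte Carlo run by the squared
# Spearman rank correlation between each input and the model outcome.
# The three-input toy model stands in for a multimedia fate model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n = 10000
inputs = {"emission_rate": rng.lognormal(0, 1.0, n),
          "half_life":     rng.lognormal(0, 0.5, n),
          "mixing_height": rng.uniform(100, 2000, n)}
outcome = inputs["emission_rate"] * inputs["half_life"] / inputs["mixing_height"]

for name, values in inputs.items():
    rho, _ = spearmanr(values, outcome)
    print(f"{name:14s} rho^2 = {rho**2:.2f}")     # share of rank-variance explained
```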
Ma, Kaifeng; Sun, Lidan; Cheng, Tangren; Pan, Huitang; Wang, Jia; Zhang, Qixiang
2018-01-01
Increasing evidence shows that epigenetics plays an important role in phenotypic variance. However, little is known about epigenetic variation in the important ornamental tree Prunus mume. We used amplified fragment length polymorphism (AFLP) and methylation-sensitive amplified polymorphism (MSAP) techniques, and association analysis and sequencing to investigate epigenetic variation and its relationships with genetic variance, environment factors, and traits. By performing leaf sampling, the relative total methylation level (29.80%) was detected in 96 accessions of P. mume. And the relative hemi-methylation level (15.77%) was higher than the relative full methylation level (14.03%). The epigenetic diversity (I∗ = 0.575, h∗ = 0.393) was higher than the genetic diversity (I = 0.484, h = 0.319). The cultivated population displayed greater epigenetic diversity than the wild populations in both southwest and southeast China. We found that epigenetic variance and genetic variance, and environmental factors performed cooperative structures, respectively. In particular, leaf length, width and area were positively correlated with relative full methylation level and total methylation level, indicating that the DNA methylation level played a role in trait variation. In total, 203 AFLP and 423 MSAP associated markers were detected and 68 of them were sequenced. Homologous analysis and functional prediction suggested that the candidate marker-linked genes were essential for leaf morphology development and metabolism, implying that these markers play critical roles in the establishment of leaf length, width, area, and ratio of length to width. PMID:29441078
Ma, Kaifeng; Sun, Lidan; Cheng, Tangren; Pan, Huitang; Wang, Jia; Zhang, Qixiang
2018-01-01
Increasing evidence shows that epigenetics plays an important role in phenotypic variance. However, little is known about epigenetic variation in the important ornamental tree Prunus mume. We used amplified fragment length polymorphism (AFLP) and methylation-sensitive amplified polymorphism (MSAP) techniques, and association analysis and sequencing to investigate epigenetic variation and its relationships with genetic variance, environment factors, and traits. By performing leaf sampling, the relative total methylation level (29.80%) was detected in 96 accessions of P. mume. And the relative hemi-methylation level (15.77%) was higher than the relative full methylation level (14.03%). The epigenetic diversity (I∗ = 0.575, h∗ = 0.393) was higher than the genetic diversity (I = 0.484, h = 0.319). The cultivated population displayed greater epigenetic diversity than the wild populations in both southwest and southeast China. We found that epigenetic variance and genetic variance, and environmental factors performed cooperative structures, respectively. In particular, leaf length, width and area were positively correlated with relative full methylation level and total methylation level, indicating that the DNA methylation level played a role in trait variation. In total, 203 AFLP and 423 MSAP associated markers were detected and 68 of them were sequenced. Homologous analysis and functional prediction suggested that the candidate marker-linked genes were essential for leaf morphology development and metabolism, implying that these markers play critical roles in the establishment of leaf length, width, area, and ratio of length to width.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
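One of the pitfalls alluded to above can be reproduced in a few lines: selecting features on the full data set before cross-validation inflates the apparent accuracy even on pure-noise data, whereas nesting the selection inside each training fold does not. The specific pitfall and classifier below are illustrative choices, not necessarily those simulated in the paper.

```python
# Sketch of one evaluation pitfall: feature selection done on the full data set
# before cross-validation biases accuracy upward even on pure-noise data,
# while selection nested inside each training fold does not.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
X = rng.normal(size=(80, 2000))            # pure noise: true accuracy is 0.5
y = rng.integers(0, 2, 80)

X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)        # selection outside CV
leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

nested = cross_val_score(make_pipeline(SelectKBest(f_classif, k=20),
                                       LogisticRegression()), X, y, cv=5).mean()
print(f"leaky CV accuracy : {leaky:.2f}")   # optimistically biased (well above 0.5)
print(f"nested CV accuracy: {nested:.2f}")  # close to chance
```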
Ho, Kai-Yu; Epstein, Ryan; Garcia, Ron; Riley, Nicole; Lee, Szu-Ping
2017-02-01
Study Design Controlled laboratory study. Background Although it has been theorized that patellofemoral joint (PFJ) taping can correct patellar malalignment, the effects of PFJ taping techniques on patellar alignment and contact area have not yet been studied during weight bearing. Objective To examine the effects of 2 taping approaches (Kinesio and McConnell) on PFJ alignment and contact area. Methods Fourteen female subjects with patellofemoral pain and PFJ malalignment participated. Each subject underwent a pretaping magnetic resonance imaging (MRI) scan session and 2 MRI scan sessions after the application of the 2 taping techniques, which aimed to correct lateral patellar displacement. Subjects were asked to report their pain level prior to each scan session. During MRI assessment, subjects were loaded with 25% of body weight on their involved/more symptomatic leg at 0°, 20°, and 40° of knee flexion. The outcome measures included patellar lateral displacement (bisect-offset [BSO] index), mediolateral patellar tilt angle, patellar height (Insall-Salvati ratio), contact area, and pain. Patellofemoral joint alignment and contact area were compared among the 3 conditions (no tape, Kinesio, and McConnell) at 3 knee angles using a 2-factor, repeated-measures analysis of variance. Pain was compared among the 3 conditions using the Friedman test and post hoc Wilcoxon signed-rank tests. Results Our data did not reveal any significant effects of either McConnell or Kinesio taping on the BSO index, patellar tilt angle, Insall-Salvati ratio, or contact area across the 3 knee angles, whereas knee angle had a significant effect on the BSO index and contact area. A reduction in pain was observed after the application of the Kinesio taping technique. Conclusion In a weight-bearing condition, this preliminary study did not support the use of PFJ taping as a medial correction technique to alter the PFJ contact area or alignment of the patella. J Orthop Sports Phys Ther 2017;47(2):115-123. doi:10.2519/jospt.2017.6936.
NASA Astrophysics Data System (ADS)
Niemi, Sami-Matias; Kitching, Thomas D.; Cropper, Mark
2015-12-01
One of the most powerful techniques to study the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and of the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and to isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of a weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications, and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy property using only photometric data, allowing a reduction of shape noise. As a proof of concept the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply this to the case of weak lensing power spectra.
Development of a software package for solid-angle calculations using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai
2014-02-01
Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and with calculations using Geant4. The comparison shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4.
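The package itself is not reproduced here, but the basic Monte Carlo idea can be sketched for the simplest configuration: a point source on the axis of a circular disk detector, for which an analytic solid angle is available as a check. The geometry and sample size below are arbitrary assumptions.

```python
# Hedged sketch of the basic idea (not the published package): Monte Carlo
# estimate of the solid angle subtended by a circular disk detector of radius
# R at distance d from an on-axis point source, checked against the analytic
# value 2*pi*(1 - d/sqrt(d**2 + R**2)).
import numpy as np

rng = np.random.default_rng(1)
R, d, n = 2.0, 5.0, 1_000_000

# Sample isotropic directions (uniform on the unit sphere).
u = rng.uniform(-1.0, 1.0, n)          # cos(theta)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
sin_t = np.sqrt(1.0 - u**2)
vx, vy, vz = sin_t * np.cos(phi), sin_t * np.sin(phi), u

# A ray hits the disk (plane z = d) if vz > 0 and its intersection lies within R.
forward = vz > 0
t = d / vz[forward]
r2 = (vx[forward] * t) ** 2 + (vy[forward] * t) ** 2
frac = np.count_nonzero(r2 <= R**2) / n

omega_mc = 4.0 * np.pi * frac
omega_exact = 2.0 * np.pi * (1.0 - d / np.hypot(d, R))
print(omega_mc, omega_exact)
```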
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes considering the trade-off between time consumption and classification rate.
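A minimal sketch of the dimension-reduction step described above, assuming scikit-learn is available: principal components are retained until at least 90% of the cumulative variance is explained. The feature matrix is a random placeholder, not the wine or tea data.

```python
# Illustrative sketch of the feature-reduction step: keep enough principal
# components to explain at least 90% of the cumulative variance. The
# sensor-array matrix X (samples x features) is a placeholder.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32))          # placeholder e-nose feature matrix

pca = PCA(n_components=0.90)           # retain >= 90% cumulative variance
scores = pca.fit_transform(X)

print(pca.n_components_, pca.explained_variance_ratio_.cumsum()[-1])
```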
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
TOPSIS based parametric optimization of laser micro-drilling of TBC coated nickel based superalloy
NASA Astrophysics Data System (ADS)
Parthiban, K.; Duraiselvam, Muthukannan; Manivannan, R.
2018-06-01
The technique for order of preference by similarity to ideal solution (TOPSIS) approach was used for optimizing the process parameters of laser micro-drilling of nickel superalloy C263 with a Thermal Barrier Coating (TBC). Plasma spraying was used to deposit the TBC, and a picosecond Nd:YAG pulsed laser was used to drill the specimens. Drilling angle, laser scan speed and number of passes were considered as input parameters. Based on the machining conditions, a Taguchi L8 orthogonal array was used for conducting the experimental runs. The surface roughness and surface crack density (SCD) were considered as the output measures. The surface roughness was measured using a 3D White Light Interferometer (WLI) and the crack density was measured using a Scanning Electron Microscope (SEM). The optimized result achieved from this approach suggests reduced surface roughness and surface crack density. The holes drilled at an inclination angle of 45°, a laser scan speed of 3 mm/s and 400 passes were found to be optimal. From the analysis of variance (ANOVA), inclination angle and number of passes were identified as the major influencing parameters. The optimized parameter combination exhibited a 19% improvement in surface finish and a 12% reduction in SCD.
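For readers unfamiliar with TOPSIS, the following is a generic sketch of the ranking calculation with placeholder numbers (not the experimental data): both responses are treated as cost criteria, alternatives are scored by their relative closeness to the ideal solution, and the best run is the one with the highest closeness.

```python
# Generic TOPSIS ranking sketch (placeholder numbers, not the experimental
# data): both responses, surface roughness and surface crack density, are
# treated as cost criteria with equal weights.
import numpy as np

# rows = experimental runs, columns = [roughness, SCD]; placeholder values
X = np.array([[1.2, 30.0], [1.5, 25.0], [0.9, 40.0], [1.1, 28.0]])
w = np.array([0.5, 0.5])
cost = np.array([True, True])          # smaller is better for both criteria

V = w * X / np.linalg.norm(X, axis=0)  # vector-normalized, weighted matrix
ideal = np.where(cost, V.min(axis=0), V.max(axis=0))
anti = np.where(cost, V.max(axis=0), V.min(axis=0))

d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)
print("ranking (best first):", np.argsort(-closeness))
```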
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Superresolution is a sophisticated strategy to enhance the image quality of both low- and high-resolution video, performing tasks such as artifact reduction, scaling and sharpness enhancement in one algorithm, all of which reconstruct high-frequency components (above the Nyquist frequency) in some way. Recursive superresolution algorithms in particular can achieve high output quality because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the use of Kalman filters as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution; therefore, robust global motion models are mainly used, but this also limits the applicability of superresolution algorithms. Handling sequences with complex object motion is thus essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and further compare its performance to state-of-the-art methods such as trainable filters.
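A toy per-pixel temporal Kalman recursion, not the full superresolution filter of the paper, can convey the spirit of motion-adaptive variance estimation: the measurement variance is inflated for frames flagged as having unreliable motion compensation. All numbers and the motion flags are illustrative assumptions.

```python
# Minimal per-pixel temporal Kalman sketch (not the full superresolution
# filter): each new frame updates the pixel estimate, and the measurement
# variance is inflated where motion/segmentation flags indicate unreliable
# motion compensation.
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0
frames = truth + rng.normal(0, 5.0, 50)        # noisy observations of one pixel
motion_flag = rng.random(50) < 0.2             # frames with unreliable motion

x_est, p_est = frames[0], 25.0                 # state estimate and its variance
q = 0.5                                        # process noise variance
for z, moving in zip(frames[1:], motion_flag[1:]):
    r = 25.0 * (10.0 if moving else 1.0)       # motion-adaptive measurement variance
    p_pred = p_est + q                         # predict
    k = p_pred / (p_pred + r)                  # Kalman gain
    x_est = x_est + k * (z - x_est)            # update
    p_est = (1.0 - k) * p_pred
print(x_est)
```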
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-06-01
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (the latter is discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems
Kollár, Zsolt
2014-01-01
This paper investigates filter bank multicarrier (FBMC), a multicarrier modulation technique exhibiting an extremely low adjacent channel leakage ratio (ACLR) compared to the conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. Large dynamic range resulting in high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on the reduction of the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented on transmitter oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER). PMID:24558338
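The sketch below shows only the basic baseband clipping step and a PAPR measurement on a generic complex multicarrier-like signal; it is not one of the iterative clipping-and-filtering schemes evaluated in the paper, and the clipping ratio is an arbitrary choice.

```python
# Minimal sketch of amplitude clipping for PAPR reduction on a generic
# multicarrier baseband signal (a stand-in for the FBMC transmit signal).
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# Placeholder complex baseband signal (sum of many subcarriers -> near-Gaussian).
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)

# Clip the envelope at a threshold set by a clipping ratio (in dB) relative
# to the RMS level; the phase is preserved.
cr_db = 4.0
a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
mag = np.abs(x)
x_clipped = np.where(mag > a_max, a_max * x / np.maximum(mag, 1e-12), x)

print(f"PAPR before: {papr_db(x):.2f} dB, after clipping: {papr_db(x_clipped):.2f} dB")
```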
Reductive Augmentation of the Breast.
Chasan, Paul E
2018-06-01
Although breast reduction surgery plays an invaluable role in the correction of macromastia, it almost always results in a breast lacking in upper pole fullness and/or roundness. We present a technique of breast reduction combined with augmentation termed "reductive augmentation" to solve this problem. The technique is also extremely useful for correcting breast asymmetry, as well as revising significant pseudoptosis in the patient who has previously undergone breast augmentation with or without mastopexy. An evolution of techniques has been used to create a breast with more upper pole fullness and anterior projection in those patients desiring a more round, higher-profile appearance. Reductive augmentation is a one-stage procedure in which a breast augmentation is immediately followed by a modified superomedial pedicle breast reduction. Often, the excision of breast tissue is greater than would normally be performed with breast reduction alone. Thirty-five patients underwent reductive augmentation, of which 12 were primary surgeries and 23 were revisions. There was an average tissue removal of 255 and 227 g, respectively, per breast for the primary and revision groups. Six of the reductive augmentations were performed for gross asymmetry. Fourteen patients had a previous mastopexy, and 3 patients had a previous breast reduction. The average follow-up was 26 months. Reductive augmentation is an effective one-stage method for achieving a more round-appearing breast with upper pole fullness both in primary breast reduction candidates and in revisionary breast surgery. This technique can also be applied to those patients with significant asymmetry. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability, and increasing sophistication of statistical software and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
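As a small, hedged example of one of the dependence techniques mentioned above, the sketch below runs a MANOVA with statsmodels on synthetic data with two numerical dependent variables and one grouping factor; the data, effect sizes and variable names are placeholders.

```python
# Hedged illustration of MANOVA on a small synthetic data set: two numerical
# dependent variables are tested jointly against a grouping factor.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
group = np.repeat(["A", "B", "C"], 30)
shift = {"A": 0.0, "B": 0.5, "C": 1.0}
df = pd.DataFrame({
    "group": group,
    "y1": rng.normal([shift[g] for g in group], 1.0),
    "y2": rng.normal([shift[g] * 0.5 for g in group], 1.0),
})

fit = MANOVA.from_formula("y1 + y2 ~ group", data=df)
print(fit.mv_test())
```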
Bates, S; Jonaitis, D; Nail, S
2013-10-01
Total X-ray Powder Diffraction Analysis (TXRPD) using transmission geometry was able to observe significant variance in measured powder patterns for sucrose lyophilizates with differing residual water contents. Integrated diffraction intensity corresponding to the observed variances was found to be linearly correlated to residual water content as measured by an independent technique. The observed variance was concentrated in two distinct regions of the lyophilizate powder pattern, corresponding to the characteristic sucrose matrix double halo and the high angle diffuse region normally associated with free-water. Full pattern fitting of the lyophilizate powder patterns suggested that the high angle variance was better described by the characteristic diffraction profile of a concentrated sucrose/water system rather than by the free-water diffraction profile. This suggests that the residual water in the sucrose lyophilizates is intimately mixed at the molecular level with sucrose molecules forming a liquid/solid solution. The bound nature of the residual water and its impact on the sucrose matrix gives an enhanced diffraction response between 3.0 and 3.5 beyond that expected for free-water. The enhanced diffraction response allows semi-quantitative analysis of residual water contents within the studied sucrose lyophilizates to levels below 1% by weight. Copyright © 2013 Elsevier B.V. All rights reserved.
Sources and implications of whole-brain fMRI signals in humans
Power, Jonathan D; Plitt, Mark; Laumann, Timothy O; Martin, Alex
2016-01-01
Whole-brain fMRI signals are a subject of intense interest: variance in the global fMRI signal (the spatial mean of all signals in the brain) indexes subject arousal, and psychiatric conditions such as schizophrenia and autism have been characterized by differences in the global fMRI signal. Further, vigorous debates exist on whether global signals ought to be removed from fMRI data. However, surprisingly little research has focused on the empirical properties of whole-brain fMRI signals. Here we map the spatial and temporal properties of the global signal, individually, in 1000+ fMRI scans. Variance in the global fMRI signal is strongly linked to head motion, to hardware artifacts, and to respiratory patterns and their attendant physiologic changes. Many techniques used to prepare fMRI data for analysis fail to remove these uninteresting kinds of global signal fluctuations. Thus, many studies include, at the time of analysis, prominent global effects of yawns, breathing changes, and head motion, among other signals. Such artifacts will mimic dynamic neural activity and will spuriously alter signal covariance throughout the brain. Methods capable of isolating and removing global artifactual variance while preserving putative “neural” variance are needed; this paper adopts no position on the topic of global signal regression. PMID:27751941
Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz
2015-05-01
The problem of interpretation of common provenance of the samples within the infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was under investigation. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can be easily proposed for databases described by a few variables, the research focused on the problem of dimensionality reduction for spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that easily deal with multidimensionality with an LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique supported by methods for variance analysis, and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from DWT preserve the signal characteristics, being a sparse representation of the original signal that keeps its shape and relevant chemical information.
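A minimal sketch of the data-reduction step, assuming the PyWavelets package: a discrete wavelet transform compresses a long spectrum into a short vector of approximation coefficients that could then feed LR models. The wavelet family, decomposition level and the synthetic "spectrum" are illustrative choices, not the settings used in the study.

```python
# Hedged sketch of DWT-based dimensionality reduction: keep only the
# low-frequency approximation coefficients of a long signal.
import numpy as np
import pywt

rng = np.random.default_rng(0)
spectrum = rng.normal(size=1024)       # placeholder IR/Raman spectrum

coeffs = pywt.wavedec(spectrum, wavelet="db4", level=5)
approx = coeffs[0]                     # low-frequency approximation coefficients

print(len(spectrum), "->", len(approx), "variables")
```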
Reducing the number of reconstructions needed for estimating channelized observer performance
NASA Astrophysics Data System (ADS)
Pineda, Angel R.; Miedema, Hope; Brenner, Melissa; Altaf, Sana
2018-03-01
A challenge for task-based optimization is the time required for each reconstructed image in applications where reconstructions are time consuming. Our goal is to reduce the number of reconstructions needed to estimate the area under the receiver operating characteristic curve (AUC) of the infinitely-trained optimal channelized linear observer. We explore the use of classifiers which either do not invert the channel covariance matrix or do feature selection. We also study the assumption that multiple low contrast signals in the same image of a non-linear reconstruction do not significantly change the estimate of the AUC. We compared the AUC of several classifiers (Hotelling, logistic regression, logistic regression using Firth bias reduction and the least absolute shrinkage and selection operator (LASSO)) with a small number of observations both for normal simulated data and images from a total variation reconstruction in magnetic resonance imaging (MRI). We used 10 Laguerre-Gauss channels and the Mann-Whitney estimator for AUC. For this data, our results show that at small sample sizes feature selection using the LASSO technique can decrease bias of the AUC estimation with increased variance and that for large sample sizes the difference between these classifiers is small. We also compared the use of multiple signals in a single reconstructed image to reduce the number of reconstructions in a total variation reconstruction for accelerated imaging in MRI. We found that AUC estimation using multiple low contrast signals in the same image resulted in similar AUC estimates as doing a single reconstruction per signal leading to a 13x reduction in the number of reconstructions needed.
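The Mann-Whitney AUC estimator referred to above can be written in a few lines; the sketch below uses placeholder Gaussian observer outputs rather than channelized observer scores from reconstructed images.

```python
# Sketch of the nonparametric (Mann-Whitney) AUC estimator: the AUC is the
# probability that a signal-present rating exceeds a signal-absent rating,
# with ties counted as one half. Ratings are placeholders.
import numpy as np

def mann_whitney_auc(scores_absent, scores_present):
    diff = scores_present[:, None] - scores_absent[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

rng = np.random.default_rng(0)
t0 = rng.normal(0.0, 1.0, 200)   # signal-absent observer outputs
t1 = rng.normal(0.8, 1.0, 200)   # signal-present observer outputs
print(f"AUC = {mann_whitney_auc(t0, t1):.3f}")
```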
Seifi, Massoud; Ghoraishian, Seyed Ahmad
2012-01-01
Background: Socket preservation after tooth extraction is one of the indications of bone grafting to enhance the preorthodontic condition. The aim of this study is to determine the effects of socket preservation on immediate tooth movement, alveolar ridge height preservation and orthodontic root resorption. Materials and Methods: In a split-mouth technique, twelve sites in three dogs were investigated as an experimental study. Crushed demineralized freeze-dried bone allograft (DFDBA) (CenoBone®) was used as the graft material. The defects were made by the extraction of the 3rd premolar. On one side of each jaw, the defects were preserved with DFDBA, and the defects of the other side were left open as the control group. Simultaneously, the teeth adjacent to the defects were pulled together by a NiTi coil spring. After eight weeks, the amount of orthodontic tooth movement (OTM), alveolar height, and root resorption were measured. Analysis of variance was used for the purpose of comparison. Results: There was a slight increase in OTM at grafted sites compared to the control sites (P<0.05). Significant bone resorption at the control sites and successful socket preservation at the experimental sites were also observed. Reduction of root resorption at the augmented site was significant compared to the normal healing site (P<0.05). Conclusion: Using socket preservation, tooth movement can be started immediately without waiting for the healing of the recipient site. This can provide advantages such as an enhanced rate of OTM, positive effects on ridge preservation that reduce the chance of dehiscence, and a reduction of root resorption. PMID:22623939
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Dose tracking and dose auditing in a comprehensive computed tomography dose-reduction program.
Duong, Phuong-Anh; Little, Brent P
2014-08-01
Implementation of a comprehensive computed tomography (CT) radiation dose-reduction program is a complex undertaking, requiring an assessment of baseline doses, an understanding of dose-saving techniques, and an ongoing appraisal of results. We describe the role of dose tracking in planning and executing a dose-reduction program and discuss the use of the American College of Radiology CT Dose Index Registry at our institution. We review the basics of dose-related CT scan parameters, the components of the dose report, and the dose-reduction techniques, showing how an understanding of each technique is important in effective auditing of "outlier" doses identified by dose tracking. Copyright © 2014 Elsevier Inc. All rights reserved.
Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries
NASA Astrophysics Data System (ADS)
De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.
This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by a reductive-acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were realised according to a full factorial design, then simple regression equations for Mn, Zn and Fe extraction were determined from the experimental data as a function of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by the analysis of variance (ANOVA). This analysis evidenced the best operating conditions of the reductive acid leaching: 70% of manganese and 100% of zinc were extracted after 5 h, at 80 °C with 20% of pulp density, 1.8 M sulphuric acid concentration and 59.4 g L -1 of oxalic acid. Both manganese and zinc extraction yields higher than 96% were obtained by using two sequential leaching steps.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, the exponential, etc. However, constraining a histogram to fit a function with a fixed shape increases the estimation error, and all the information extracted from such a pdf will carry this error. With such techniques, it is highly likely that artificial characteristics not present in the original data will appear in the estimated pdf. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC distribution. As a result, better pdf estimates, which almost perfectly fit the observed TEC values, can be obtained compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements of the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and the joint TUBITAK 114E092 and AS CR14/001 projects.
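A minimal sketch of the statistical step described above, assuming SciPy: a Gaussian kernel density estimate of the TEC pdf plus the sample mean, variance and kurtosis. The TEC values are synthetic placeholders, not GNSS-derived estimates.

```python
# Hedged sketch: non-parametric pdf estimation with a Gaussian KDE and the
# moments discussed above. The TEC sample is a synthetic placeholder.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tec = rng.gamma(shape=9.0, scale=2.5, size=2000)   # placeholder TEC values (TECU)

kde = stats.gaussian_kde(tec)          # non-parametric pdf estimate
grid = np.linspace(tec.min(), tec.max(), 200)
pdf = kde(grid)                        # pdf evaluated on a grid

print("mean =", tec.mean(), "variance =", tec.var(ddof=1),
      "kurtosis =", stats.kurtosis(tec))
```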
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
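The following is a hedged illustration only: one simple moment-based reading of a Poisson-masked item count design, assuming a treatment group that reports a Poisson "mask" count plus the sensitive indicator and a control group that reports the mask count alone. It is not necessarily the estimator derived in the paper.

```python
# Hedged sketch (assumed treatment/control design, not the paper's exact
# estimator): estimate the sensitive proportion as a difference of group
# means, with a simple moment-based variance.
import numpy as np

rng = np.random.default_rng(0)
n_t, n_c, lam, pi_true = 2000, 2000, 3.0, 0.15

treat = rng.poisson(lam, n_t) + (rng.random(n_t) < pi_true)   # mask + indicator
ctrl = rng.poisson(lam, n_c)                                  # mask only

pi_hat = treat.mean() - ctrl.mean()
se = np.sqrt(treat.var(ddof=1) / n_t + ctrl.var(ddof=1) / n_c)
lo_ci, hi_ci = np.clip([pi_hat - 1.96 * se, pi_hat + 1.96 * se], 0.0, 1.0)
print(f"pi_hat = {pi_hat:.3f}, 95% CI = ({lo_ci:.3f}, {hi_ci:.3f})")
```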
Synopsis of timing measurement techniques used in telecommunications
NASA Technical Reports Server (NTRS)
Zampetti, George
1993-01-01
Historically, Maximum Time Interval Error (MTIE) and Maximum Relative Time Interval Error (MRTIE) have been the main measurement techniques used to characterize timing performance in telecommunications networks. Recently, a new measurement technique, Time Variance (TVAR) has gained acceptance in the North American (ANSI) standards body. TVAR was developed in concurrence with NIST to address certain inadequacies in the MTIE approach. The advantages and disadvantages of each of these approaches are described. Real measurement examples are presented to illustrate the critical issues in actual telecommunication applications. Finally, a new MTIE measurement is proposed (ZTIE) that complements TVAR. Together, TVAR and ZTIE provide a very good characterization of network timing.
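A hedged sketch of the MTIE idea: for a given observation window length, MTIE is the largest peak-to-peak excursion of the time error over all windows of that length in the record. TVAR and the proposed ZTIE involve additional averaging and are not shown; the time-error record below is a synthetic placeholder.

```python
# Hedged sketch of an MTIE measurement: maximum peak-to-peak time error over
# all sliding windows of a given length.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mtie(time_error, window_samples):
    w = sliding_window_view(time_error, window_samples)
    return np.max(w.max(axis=1) - w.min(axis=1))

rng = np.random.default_rng(0)
tie = np.cumsum(rng.normal(0, 1e-9, 10_000))   # placeholder time-error record (s)
print(mtie(tie, 100))
```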
Clark, Larkin; Wells, Martha H; Harris, Edward F; Lou, Jennifer
2016-01-01
To determine if aggressiveness of primary tooth preparation varied among different brands of zirconia and stainless steel (SSC) crowns. One hundred primary typodont teeth were divided into five groups (10 posterior and 10 anterior) and assigned to: Cheng Crowns (CC); EZ Pedo (EZP); Kinder Krowns (KKZ); NuSmile (NSZ); and SSC. Teeth were prepared, and assigned crowns were fitted. Teeth were weighed prior to and after preparation. Weight changes served as a surrogate measure of tooth reduction. Analysis of variance showed a significant difference in tooth reduction among brand/type for both the anterior and posterior. Tukey's honest significant difference test (HSD), when applied to anterior data, revealed that SSCs required significantly less tooth removal compared to the composite of the four zirconia brands, which showed no significant difference among them. Tukey's HSD test, applied to posterior data, revealed that CC required significantly greater removal of crown structure, while EZP, KKZ, and NSZ were statistically equivalent, and SSCs required significantly less removal. Zirconia crowns required more tooth reduction than stainless steel crowns for primary anterior and posterior teeth. Tooth reduction for anterior zirconia crowns was equivalent among brands. For posterior teeth, reduction for three brands (EZ Pedo, Kinder Krowns, NuSmile) did not differ, while Cheng Crowns required more reduction.
Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction
NASA Astrophysics Data System (ADS)
Scarnati, Theresa; Gelb, Anne
2018-04-01
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance based joint sparsity reconstruction method for forming SAR images from multiple SAR images of the same scene. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as images from the MSTAR T-72 database.
Individual and population-level responses to ocean acidification.
Harvey, Ben P; McKeown, Niall J; Rastrick, Samuel P S; Bertolini, Camilla; Foggo, Andy; Graham, Helen; Hall-Spencer, Jason M; Milazzo, Marco; Shaw, Paul W; Small, Daniel P; Moore, Pippa J
2016-01-29
Ocean acidification is predicted to have detrimental effects on many marine organisms and ecological processes. Despite growing evidence for direct impacts on specific species, few studies have simultaneously considered the effects of ocean acidification on individuals (e.g. consequences for energy budgets and resource partitioning) and population level demographic processes. Here we show that ocean acidification increases energetic demands on gastropods resulting in altered energy allocation, i.e. reduced shell size but increased body mass. When scaled up to the population level, long-term exposure to ocean acidification altered population demography, with evidence of a reduction in the proportion of females in the population and genetic signatures of increased variance in reproductive success among individuals. Such increased variance enhances levels of short-term genetic drift which is predicted to inhibit adaptation. Our study indicates that even against a background of high gene flow, ocean acidification is driving individual- and population-level changes that will impact eco-evolutionary trajectories.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2016-01-01
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions. PMID:27277017
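The sketch below shows only the Anscombe transformation itself, the variance-stabilizing step the method relies on, not the full dose-simulation pipeline: for Poisson counts x, A(x) = 2*sqrt(x + 3/8) has approximately unit variance, and a simple algebraic inverse maps processed values back to the count domain.

```python
# Sketch of the Anscombe transformation and a simple algebraic inverse; not
# the authors' full noise-injection pipeline.
import numpy as np

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0   # simple algebraic inverse

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=100_000)
print(np.var(anscombe(counts)))          # close to 1 for moderate-to-large means
```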
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from the binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to the portion of the standard curve that delivers optimal sensitivity is stressed. Published methods for automated data reduction of Scatchard plots for radioreceptor assays are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated data reduction, as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; and (4) providing ongoing quality-control charts and between-assay variance.
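A hedged sketch of the curve-fitting approach favored above: fit a third-order polynomial in the square root of concentration to standard-curve responses and read an unknown off the fitted curve numerically. The standard points below are placeholders, not assay data.

```python
# Illustrative standard-curve fit: cubic polynomial in sqrt(concentration),
# inverted numerically over the calibrated range for an unknown response.
import numpy as np

conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # standards
resp = np.array([0.95, 0.80, 0.70, 0.55, 0.42, 0.30, 0.20])   # bound fraction

coeffs = np.polyfit(np.sqrt(conc), resp, deg=3)               # cubic in sqrt(conc)
fit = np.poly1d(coeffs)

unknown_resp = 0.50
grid = np.linspace(0.0, np.sqrt(conc.max()), 2001)
est_conc = grid[np.argmin(np.abs(fit(grid) - unknown_resp))] ** 2
print(f"estimated concentration ~ {est_conc:.1f}")
```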
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classi?er is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classi?cation accuracy as the number of dimensions is increased.
Combinative Particle Size Reduction Technologies for the Production of Drug Nanocrystals
Salazar, Jaime; Müller, Rainer H.; Möschwitzer, Jan P.
2014-01-01
Nanosizing is a suitable method to enhance the dissolution rate and therefore the bioavailability of poorly soluble drugs. The success of the particle size reduction processes depends on critical factors such as the employed technology, equipment, and drug physicochemical properties. High pressure homogenization and wet bead milling are standard comminution techniques that have been already employed to successfully formulate poorly soluble drugs and bring them to market. However, these techniques have limitations in their particle size reduction performance, such as long production times and the necessity of employing a micronized drug as the starting material. This review article discusses the development of combinative methods, such as the NANOEDGE, H 96, H 69, H 42, and CT technologies. These processes were developed to improve the particle size reduction effectiveness of the standard techniques. These novel technologies can combine bottom-up and/or top-down techniques in a two-step process. The combinative processes lead in general to improved particle size reduction effectiveness. Faster production of drug nanocrystals and smaller final mean particle sizes are among the main advantages. The combinative particle size reduction technologies are very useful formulation tools, and they will continue acquiring importance for the production of drug nanocrystals. PMID:26556191
Factors That Attenuate the Correlation Coefficient and Its Analogs.
ERIC Educational Resources Information Center
Dolenz, Beverly
The correlation coefficient is an integral part of many other statistical techniques (analysis of variance, t-tests, etc.), since all analytic methods are actually correlational (G. V. Glass and K. D. Hopkins, 1984). The correlation coefficient is a statistical summary that represents the degree and direction of relationship between two variables.…
Using Structural Equation Models with Latent Variables to Study Student Growth and Development.
ERIC Educational Resources Information Center
Pike, Gary R.
1991-01-01
Analysis of data on freshman-to-senior developmental gains in 722 University of Tennessee-Knoxville students provides evidence of the advantages of structural equation modeling with latent variables and suggests that the group differences identified by traditional analysis of variance and covariance techniques may be an artifact of measurement…
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
Mobley et al. Turnover Model Reanalysis and Review of Existing Data.
ERIC Educational Resources Information Center
Dalessio, Anthony; And Others
Job satisfaction has been identified as one of the most important antecedents of turnover, although it rarely accounts for more than 16% of the variance in employee withdrawal. Several data sets collected on the Mobley, Horner, and Hollingsworth (1978) model of turnover were reanalyzed with path analytic techniques. Data analyses revealed support…
USDA-ARS?s Scientific Manuscript database
Eddy covariance (EC) is a well-established, non-intrusive observational technique that has long been used to measure the net carbon balance of numerous ecosystems including crop lands for perennial crops such as orchards and vineyards, and pasturelands. While EC measures net carbon fluxes well, it ...
Regression sampling: some results for resource managers and researchers
William G. O' Regan; Robert W. Boyd
1974-01-01
Regression sampling is widely used in natural resources management and research to estimate quantities of resources per unit area. This note brings together results found in the statistical literature in the application of this sampling technique. Conditional and unconditional estimators are listed and for each estimator, exact variances and unbiased estimators for the...
The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important, also, is understanding the er...
Statistics for People Who (Think They) Hate Statistics. Third Edition
ERIC Educational Resources Information Center
Salkind, Neil J.
2007-01-01
This text teaches an often intimidating and difficult subject in a way that is informative, personable, and clear. The author takes students through various statistical procedures, beginning with correlation and graphical representation of data and ending with inferential techniques and analysis of variance. In addition, the text covers SPSS, and…
The Effect of the Multivariate Box-Cox Transformation on the Power of MANOVA.
ERIC Educational Resources Information Center
Kirisci, Levent; Hsu, Tse-Chi
Most of the multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are assumed to be negligible when variance-covariance matrices and sample sizes are equal. Therefore, in practice, investigators do not usually attempt to remove non-normality. In this simulation…
A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.
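As a toy illustration of the underlying idea (first-order perturbation only, not the second-order PFEM formulation), the sketch below propagates input means and variances through a simple response function using its gradient at the mean; the bar-deflection example and all numbers are assumptions.

```python
# Toy first-order mean/variance propagation: delta = P*L/(E*A) with random
# (independent) E and P; gradient taken by central differences at the mean.
import numpy as np

L, A = 2.0, 1e-4                          # deterministic geometry
mu = np.array([200e9, 1e4])               # means of [E, P]
var = np.array([(10e9) ** 2, (1e3) ** 2]) # variances of [E, P]

def response(x):
    E, P = x
    return P * L / (E * A)

eps = mu * 1e-6
grad = np.array([(response(mu + e) - response(mu - e)) / (2 * e[i])
                 for i, e in enumerate(np.diag(eps))])

mean_resp = response(mu)                  # first-order mean estimate
var_resp = np.sum(grad ** 2 * var)        # first-order variance estimate
print(mean_resp, np.sqrt(var_resp))
```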
IsobariQ: software for isobaric quantitative proteomics using IPTL, iTRAQ, and TMT.
Arntzen, Magnus Ø; Koehler, Christian J; Barsnes, Harald; Berven, Frode S; Treumann, Achim; Thiede, Bernd
2011-02-04
Isobaric peptide labeling plays an important role in relative quantitative comparisons of proteomes. Isobaric labeling techniques utilize MS/MS spectra for relative quantification, which can be either based on the relative intensities of reporter ions in the low mass region (iTRAQ and TMT) or on the relative intensities of quantification signatures throughout the spectrum due to isobaric peptide termini labeling (IPTL). Due to the increased quantitative information found in MS/MS fragment spectra generated by the recently developed IPTL approach, new software was required to extract the quantitative information. IsobariQ was specifically developed for this purpose; however, support for the reporter ion techniques iTRAQ and TMT is also included. In addition, to address recently emphasized issues about heterogeneity of variance in proteomics data sets, IsobariQ employs the statistical software package R and variance stabilizing normalization (VSN) algorithms available therein. Finally, the functionality of IsobariQ is validated with data sets of experiments using 6-plex TMT and IPTL. Notably, protein substrates resulting from cleavage by proteases can be identified as shown for caspase targets in apoptosis.
Repeatability of paired counts.
Alexander, Neal; Bethony, Jeff; Corrêa-Oliveira, Rodrigo; Rodrigues, Laura C; Hotez, Peter; Brooker, Simon
2007-08-30
The Bland and Altman technique is widely used to assess the variation between replicates of a method of clinical measurement. It yields the repeatability, i.e. the value below which the difference between 95 per cent of pairs of repeat measurements is expected to lie. The valid use of the technique requires that the variance is constant over the data range. This is not usually the case for counts of items such as CD4 cells or parasites, nor is the log transformation applicable to zero counts. We investigate the properties of generalized differences based on Box-Cox transformations. As an example, in a data set of hookworm eggs counted by the Kato-Katz method, the square root transformation is found to stabilize the variance. We show how to back-transform the repeatability on the square root scale to the repeatability of the counts themselves, as an increasing function of the square mean root egg count, i.e. the square of the average of square roots. As well as being more easily interpretable, the back-transformed results highlight the dependence of the repeatability on the sample volume used.
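A hedged sketch of the variance-stabilizing step discussed above: the Bland-Altman repeatability of paired counts computed on the square-root scale, together with the square mean root. The back-transformation to the count scale follows the paper and is not reproduced; the paired counts are simulated placeholders.

```python
# Hedged sketch: repeatability (1.96 * SD of paired differences) on the
# square-root scale for simulated paired counts.
import numpy as np

rng = np.random.default_rng(0)
true_mean = rng.gamma(shape=1.5, scale=200.0, size=300)   # subject-level intensity
count1 = rng.poisson(true_mean)                           # first count
count2 = rng.poisson(true_mean)                           # repeat count

d = np.sqrt(count1) - np.sqrt(count2)
repeatability_sqrt = 1.96 * np.std(d, ddof=1)
square_mean_root = ((np.sqrt(count1) + np.sqrt(count2)) / 2.0) ** 2
print(repeatability_sqrt, square_mean_root.mean())
```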
WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, J; Zhu, L
Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we aim to reduce noise of energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in a matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of different energy channels. Noise suppression is achieved for each energy channel via multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with energies ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan©600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT. Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
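A toy-scale, hedged sketch of the filtration idea described in the abstract: build a pixel-by-pixel similarity matrix per energy channel from an exponential model of CT-number differences, average the matrices with inverse-variance weights, row-normalize (an added assumption for the sketch), and apply the result to each channel image. The image size, bandwidth and noise levels are illustrative.

```python
# Hedged toy sketch of variance-weighted non-local filtration across
# energy channels (1-D "images" standing in for CT slices).
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan = 400, 6
base = np.repeat([0.0, 1.0, 2.0, 3.0], n_pix // 4)      # toy "materials"
noise_var = np.linspace(0.01, 0.04, n_chan)             # per-channel noise variance
images = np.stack([base + rng.normal(0, np.sqrt(v), n_pix) for v in noise_var])

h = 0.3                                                 # similarity bandwidth (illustrative)
sims = np.stack([np.exp(-(img[:, None] - img[None, :]) ** 2 / h ** 2) for img in images])

w = (1.0 / noise_var) / np.sum(1.0 / noise_var)         # inverse-variance weights
S = np.tensordot(w, sims, axes=1)                       # weighted average similarity
S /= S.sum(axis=1, keepdims=True)                       # row-normalize (sketch assumption)

denoised = images @ S.T                                 # filter each channel image
rmse_before = np.sqrt(((images - base) ** 2).mean())
rmse_after = np.sqrt(((denoised - base) ** 2).mean())
print(rmse_before, rmse_after)
```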
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. All of these rainfall estimates simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series able to simulate streamflow better than a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
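A minimal sketch of the input-data-reduction step, assuming the PyWavelets package and an arbitrary wavelet and decomposition level: the rainfall series is replaced by its DWT approximation coefficients, which a sampler such as DREAM(ZS) could then perturb jointly with the model parameters. The sampler and hydrologic model are not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def reduce_rainfall(rain, wavelet="db4", level=4):
    """Represent a rainfall time series by the approximation coefficients of a
    low-order DWT decomposition; these become the quantities estimated jointly
    with the hydrologic model parameters (wavelet and level are assumptions)."""
    coeffs = pywt.wavedec(rain, wavelet, level=level)
    approx = coeffs[0]                       # low-dimensional description of the hyetograph
    return approx, [c.shape for c in coeffs]

def reconstruct_rainfall(approx, shapes, wavelet="db4"):
    """Map a proposed set of approximation coefficients back to a rainfall series
    (detail coefficients set to zero), e.g. inside an MCMC likelihood evaluation."""
    coeffs = [approx] + [np.zeros(s) for s in shapes[1:]]
    rain = pywt.waverec(coeffs, wavelet)
    return np.clip(rain, 0.0, None)          # rainfall cannot be negative

rain = np.random.default_rng(1).gamma(0.3, 5.0, size=512)   # synthetic rainfall series
approx, shapes = reduce_rainfall(rain)
proposal = reconstruct_rainfall(approx, shapes)              # candidate rainfall for the sampler
```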
Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol
2015-06-10
Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, it is sometimes difficult to achieve anatomic reduction with the open reduction technique in cases with comminution. We describe a novel technique using a real-size three-dimensionally (3D) printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed into real-size clavicle models. Using the mirror imaging technique, the uninjured clavicle is 3D printed as a mirrored model to produce a suitable replica of the fractured clavicle as it was before injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate an accurate anatomical replica of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be used as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction with the open reduction technique. Level of evidence V.
Blakely, Colin K; Bruno, Shaun R; Poltavets, Viktor V
2011-07-18
A chimie douce solvothermal reduction method is proposed for topotactic oxygen deintercalation of complex metal oxides. Four different reduction techniques were employed to qualitatively rank their relative reduction activity, including reduction with H2 and with NaH, solution-based reduction using metal hydrides at ambient pressure, and reduction under solvothermal conditions. The reduction of the Ruddlesden-Popper nickelate La4Ni3O10 was used as a test case to prove the validity of the method. The completely reduced phase La4Ni3O8 was produced via the solvothermal technique at 150 °C, a lower temperature than required by other, more conventional solid-state oxygen deintercalation methods.
Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem in which the optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and to increase the signal-to-interference-plus-noise ratio (SINR) for wanted signals. PMID:25003136
Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh
2014-01-01
In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (placing nulls) and produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem in which the optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming for controlling the null steering of interference and to increase the signal-to-interference-plus-noise ratio (SINR) for wanted signals.
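For reference, the classical MVDR weights are w = R⁻¹a / (aᴴR⁻¹a). The sketch below illustrates this baseline on a toy uniform linear array; the DM-AIS optimisation proposed in the paper is not reproduced, and the array geometry, angles and covariance are assumptions.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """Steering vector of a uniform linear array (assumed geometry)."""
    n = np.arange(n_elements)
    phase = -2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def mvdr_weights(R, theta_deg, n_elements):
    """Classical MVDR weights w = R^{-1} a / (a^H R^{-1} a)."""
    a = steering_vector(theta_deg, n_elements)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# toy scenario: desired signal at 0 deg, interferer at 40 deg, plus unit noise
M = 8
a0, a1 = steering_vector(0, M), steering_vector(40, M)
R = 10 * np.outer(a1, a1.conj()) + np.eye(M)       # interference-plus-noise covariance
w = mvdr_weights(R, 0, M)
pattern = lambda th: np.abs(w.conj() @ steering_vector(th, M))
print(pattern(0), pattern(40))                      # unit response at 0 deg, deep null near 40 deg
```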
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may capture only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image that describes the scene more accurately than any individual source image. In addition, the fused image retains the best available quality without distortion or loss of data; the DCT is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) each RGB colour source image is split into its R, G, and B channels; (2) the DCT is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared on their variance values and the block with the maximum variance is selected for the new image, and this process is repeated for all channels of the source images; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can reduce unwanted side effects such as blurring or blocking artifacts that degrade the quality of the fused image. The proposed approach is evaluated using three measures: the average Q(abf), the standard deviation, and the peak signal-to-noise ratio. The experimental results of this technique compare favourably with older techniques. © 2016 Wiley Periodicals, Inc.
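A hedged sketch of the five-step scheme for one pair of source images, using SciPy's 2-D DCT; the block size, image sizes and toy inputs are assumptions, and this is not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn   # 2-D DCT / inverse DCT

def fuse_channel(ch_a, ch_b, block=8):
    """Fuse one colour channel of two source images by keeping, for every 8x8 DCT
    block, the block with the larger coefficient variance (sharper content).
    Image dimensions are assumed divisible by the block size."""
    out = np.empty_like(ch_a, dtype=float)
    for i in range(0, ch_a.shape[0], block):
        for j in range(0, ch_a.shape[1], block):
            A = dctn(ch_a[i:i+block, j:j+block], norm="ortho")
            B = dctn(ch_b[i:i+block, j:j+block], norm="ortho")
            chosen = A if A.var() > B.var() else B           # max-variance selection rule
            out[i:i+block, j:j+block] = idctn(chosen, norm="ortho")
    return out

def fuse_rgb(img_a, img_b):
    """Apply the per-channel fusion to R, G and B and stack the result."""
    return np.dstack([fuse_channel(img_a[..., c], img_b[..., c]) for c in range(3)])

# toy multifocus pair: each image is sharp in a different half
rng = np.random.default_rng(2)
base = rng.random((64, 64, 3))
img_a, img_b = base.copy(), base.copy()
img_b[:, 32:, :] = base[:, 32:, :].mean()      # right half "blurred" in image B
fused = fuse_rgb(img_a, img_b)
```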
Martin, Murphy P; Rojas, David; Mauffrey, Cyril
2018-07-01
Tile C pelvic ring injuries are challenging to manage even in the most experienced hands. The majority of such injuries can be managed using percutaneous reduction techniques, and the posterior ring can be stabilized using percutaneous transiliac-transsacral screw fixation. However, a subgroup of patients present with inadequate bony corridors, significant sacral zone 2 comminution or significant lateral/vertical displacement of the hemipelvis through a complete sacral fracture. Percutaneous strategies in such circumstances can be dangerous. Those patients may benefit from prone positioning and open reduction of the sacral fracture with fixation through tension band plating or lumbo-pelvic fixation. Soft tissue handling is critical, and direct reduction techniques around the sacrum can be difficult due to the complex anatomy and the fragile nature of the sacrum, which makes clamp placement and tightening a challenge. In this paper, we propose a mini-invasive technique of indirect reduction and temporary stabilization, which is soft tissue friendly and permits maintenance of reduction during definitive surgical fixation.
Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D
2016-06-15
The receiver operating characteristic (ROC) curve is a popular technique with applications such as investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks only at the area over a range of specificities (i.e., true negative rates), and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement both for a single biomarker test and for the comparison of two correlated biomarkers, because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd.
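As a rough illustration (not the estimator proposed in the paper), the sketch below computes the nonparametric pAUC as a two-sample U-statistic with a plug-in sample quantile, together with a naive variance based on its structural components; it treats the plug-in quantile as fixed and therefore ignores the variability that the authors' method accounts for.

```python
import numpy as np

def pauc_estimate(x, y, p=0.2):
    """Nonparametric pAUC over false-positive rates in [0, p]:
    a two-sample U-statistic with a plug-in sample quantile.
    x: marker values in the non-diseased group, y: diseased group."""
    q = np.quantile(x, 1.0 - p)                     # plug-in specificity threshold
    kernel = (y[:, None] > x[None, :]) & (x[None, :] > q)
    return kernel.mean(), kernel

def pauc_variance_naive(kernel):
    """Simplified variance from the structural components of the U-statistic,
    with the plug-in quantile treated as fixed (illustrative only)."""
    m, n = kernel.shape                             # diseased x non-diseased
    v10 = kernel.mean(axis=1)                       # components over diseased subjects
    v01 = kernel.mean(axis=0)                       # components over non-diseased subjects
    return v10.var(ddof=1) / m + v01.var(ddof=1) / n

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 200)                       # non-diseased marker values
y = rng.normal(1.0, 1.0, 150)                       # diseased marker values
theta, K = pauc_estimate(x, y, p=0.2)
print(theta, pauc_variance_naive(K))
```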