Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both of these areas. We studied different ways of using importance sampling in the context of stochastic programming by varying the choice of approximation function used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function; this achieved similar variance reductions in orders of magnitude less time than applying the variance reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme that improves the convergence of this algorithm.
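As a toy illustration of the importance-sampling component described above (not the dissertation's recourse-function scheme; the tail-probability target, the shift of 3, and all names are assumptions of the sketch):

```python
import math
import random

def density_ratio(x, shift):
    # ratio p(x)/q(x) for target p = N(0,1) and proposal q = N(shift,1)
    return math.exp(shift * shift / 2.0 - shift * x)

def tail_prob_importance(threshold, n, rng):
    # draw from N(threshold, 1) so the rare region is sampled often,
    # then reweight each hit by the density ratio to stay unbiased
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += density_ratio(x, threshold)
    return total / n

rng = random.Random(42)
estimate = tail_prob_importance(3.0, 100_000, rng)
# P(Z > 3) for a standard normal is about 1.35e-3
```

Sampling from the shifted proposal places most draws in the rare region, so the reweighted estimate converges far faster than naive sampling of the same probability.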
Fractal structures and fractal functions as disease indicators
Escos, J.M; Alados, C.L.; Emlen, J.M.
1995-01-01
Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability in two species: green pepper (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance about the allometric relationship between control plants and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in the complexity of branch structure in green pepper under viral infection: the box-counting fractal dimension of branch architecture declined under infection stress. We likewise tested for a reduction in the complexity of behavioral patterns under stress in Spanish ibex. The fractal dimension of the head-lift frequency distribution measures predator detection efficiency; this dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proved to be the most powerful tool for characterizing fractal behavior, revealing a reduction in the complexity of the time distribution of activity under parasitic infection.
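The box-counting fractal dimension used above can be sketched as follows; this is a generic estimator on a synthetic point set, not the authors' measurement pipeline, and the scales and test segment are illustrative:

```python
import math

def box_count_dimension(points, scales):
    """Estimate fractal dimension by counting occupied grid boxes at
    several scales and fitting the slope of log N versus log(1/eps)."""
    logs = []
    for eps in scales:
        n = round(1.0 / eps)
        boxes = {(min(int(x / eps), n - 1), min(int(y / eps), n - 1))
                 for x, y in points}
        logs.append((math.log(1.0 / eps), math.log(len(boxes))))
    # least-squares slope through the (log 1/eps, log N) pairs
    mx = sum(u for u, _ in logs) / len(logs)
    my = sum(v for _, v in logs) / len(logs)
    num = sum((u - mx) * (v - my) for u, v in logs)
    den = sum((u - mx) ** 2 for u, _ in logs)
    return num / den

# a straight segment should have dimension close to 1
segment = [(t / 9999.0, t / 9999.0) for t in range(10000)]
dim = box_count_dimension(segment, [1/4, 1/8, 1/16, 1/32, 1/64])
```

A smooth curve gives a slope near 1, while more convoluted structures (such as highly branched architectures) fill more boxes at fine scales and give a larger slope.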
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
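The empirical-average step described above is where variance reduction devices attach. A minimal sketch of antithetic sampling on a toy expectation (a generic technique chosen for illustration; not necessarily one of the approaches studied in the paper):

```python
import math
import random
import statistics

def plain_samples(f, n, rng):
    return [f(rng.random()) for _ in range(n)]

def antithetic_samples(f, n, rng):
    # pair each draw u with its reflection 1 - u; for monotone f the two
    # values are negatively correlated, so the pair mean has lower variance
    out = []
    for _ in range(n):
        u = rng.random()
        out.append(0.5 * (f(u) + f(1.0 - u)))
    return out

rng = random.Random(7)
plain = plain_samples(math.exp, 20_000, rng)
anti = antithetic_samples(math.exp, 20_000, rng)
true_mean = math.e - 1.0   # E[exp(U)] for U ~ Uniform(0,1)
mean_plain = statistics.mean(plain)
mean_anti = statistics.mean(anti)
var_plain = statistics.variance(plain)
var_anti = statistics.variance(anti)
```

Both estimators are unbiased, but the per-sample variance of the antithetic version is far smaller, so fewer corrector-style solves would be needed for the same accuracy.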
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as forward-weighted CADIS (FW-CADIS).
Advanced Booster Composite Case/Polybenzimidazole Nitrile Butadiene Rubber Insulation Development
NASA Technical Reports Server (NTRS)
Gentz, Steve; Taylor, Robert; Nettles, Mindy
2015-01-01
The NASA Engineering and Safety Center (NESC) was requested to examine processing sensitivities (e.g., cure temperature control/variance, debonds, density variations) of polybenzimidazole nitrile butadiene rubber (PBI-NBR) insulation, case fiber, and resin systems and to evaluate nondestructive evaluation (NDE) and damage tolerance methods/models required to support human-rated composite motor cases. The proposed use of composite motor cases in Blocks IA and II was expected to increase performance capability through optimizing operating pressure and increasing propellant mass fraction. This assessment was to support the evaluation of risk reduction for large booster component development/fabrication, NDE of low mass-to-strength ratio material structures, and solid booster propellant formulation as requested in the Space Launch System NASA Research Announcement for Advanced Booster Engineering Demonstration and/or Risk Reduction. Composite case materials and high-energy propellants represent an enabling capability in the Agency's ability to provide affordable, high-performing advanced booster concepts. The NESC team was requested to provide an assessment of co- and multiple-cure processing of composite case and PBI-NBR insulation materials and evaluation of high-energy propellant formulations.
Umegaki, Hiroyuki; Yanagawa, Madoka; Nonogaki, Zen; Nakashima, Hirotaka; Kuzuya, Masafumi; Endo, Hidetoshi
2014-01-01
We surveyed the care burden of family caregivers, their satisfaction with the services, and whether their care burden was reduced by the introduction of long-term care insurance (LTCI) services. We randomly enrolled 3000 of the 43,250 residents of Nagoya City aged 65 and over who had been certified as requiring long-term care and who used at least one type of service provided by the public LTCI; 1835 (61.2%) subjects returned the survey. A total of 1015 subjects for whom complete sets of data were available were included in the statistical analysis. Analysis of variance was performed for the continuous variables and χ2 analysis for the categorical variables. Multiple logistic analysis was performed with the factors with p values of <0.2 in the χ2 analysis of burden reduction. A total of 68.8% of the caregivers indicated that the care burden was reduced by the introduction of the LTCI care services, and 86.8% of the caregivers were satisfied with the LTCI care services. A lower age of caregivers, a more advanced need classification level, and greater satisfaction with the services were independently associated with a reduction of the care burden. In the Japanese LTCI system, the overall satisfaction of the caregivers appears to be relatively high and is associated with the reduction of the care burden. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
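The exponential transform that Dwivedi's method builds on can be sketched in a one-group, purely absorbing slab (angular biasing omitted; the cross section, thickness, and stretching parameter are illustrative assumptions):

```python
import math
import random
import statistics

SIGMA = 1.0   # total cross section of a purely absorbing slab
T = 5.0       # slab thickness; true transmission = exp(-SIGMA * T)

def analog_score(rng):
    # analog: sample the flight distance from Exp(SIGMA), score 1 on escape
    return 1.0 if rng.expovariate(SIGMA) > T else 0.0

def transformed_score(rng, p=0.8):
    # exponential transform: stretch the free path by sampling from
    # Exp(SIGMA*(1-p)) and carry the likelihood-ratio weight on escape,
    # which keeps the estimator unbiased
    sigma_b = SIGMA * (1.0 - p)
    if rng.expovariate(sigma_b) > T:
        return math.exp(-SIGMA * T) / math.exp(-sigma_b * T)
    return 0.0  # collided inside the slab: absorbed, score 0

rng = random.Random(123)
n = 20_000
analog = [analog_score(rng) for _ in range(n)]
biased = [transformed_score(rng) for _ in range(n)]
true_t = math.exp(-SIGMA * T)
mean_analog = statistics.mean(analog)
mean_biased = statistics.mean(biased)
var_analog = statistics.variance(analog)
var_biased = statistics.variance(biased)
```

Stretching the path toward the far face makes escapes common while the weight correction keeps the mean exact, which is precisely the kind of scheme whose variance the paper's deterministic theory predicts.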
Importance Sampling Variance Reduction in GRESS ATMOSIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakeford, Daniel Tyler
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
Ex Post Facto Monte Carlo Variance Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, Thomas E.
The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
Mitigation of multipath effect in GNSS short baseline positioning by the multipath hemispherical map
NASA Astrophysics Data System (ADS)
Dong, D.; Wang, M.; Chen, W.; Zeng, Z.; Song, L.; Zhang, Q.; Cai, M.; Cheng, Y.; Lv, J.
2016-03-01
Multipath is a major error source in high-accuracy GNSS positioning, and various hardware and software approaches have been developed to mitigate it. Among them, the MHM (multipath hemispherical map) and sidereal filtering (SF)/advanced SF (ASF) approaches exploit the spatiotemporal repeatability of the multipath effect in a static environment, so they can be used to generate multipath correction models for real-time GNSS data processing. We focus on the spatiotemporal-repeatability-based MHM and SF/ASF approaches and compare their performance for multipath reduction. Comparisons indicate that both the MHM and ASF approaches perform well, reducing residual variance by about 50% over a short span (the next 5 days) and maintaining a roughly 45% reduction over a longer span (the next 6-25 days). The ASF model is more suitable for high-frequency multipath reduction, such as high-rate GNSS applications. The MHM model is easier to implement for real-time multipath mitigation when the overall multipath regime is medium to low frequency.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of the inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of the model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of the model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
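The variance ratio idea, comparing the output variance after reducing one input's variance to the nominal output variance, can be sketched with a hypothetical two-input model (the model, the direct two-run Monte Carlo estimator, and all parameters are illustrative; the paper derives single-sample-set estimators instead):

```python
import random
import statistics

def output(x1, x2):
    # hypothetical model, not the one from the paper: Y = X1 + X2^2
    return x1 + x2 * x2

def variance_ratio(eta, n, rng, sigma1=2.0):
    # eta scales the standard deviation of input X1; the ratio compares
    # the output variance after the reduction to the nominal variance
    nominal = [output(rng.gauss(0, sigma1), rng.gauss(0, 1))
               for _ in range(n)]
    reduced = [output(rng.gauss(0, eta * sigma1), rng.gauss(0, 1))
               for _ in range(n)]
    return statistics.variance(reduced) / statistics.variance(nominal)

rng = random.Random(0)
ratio = variance_ratio(0.5, 100_000, rng)
# analytically Var(Y) = sigma1^2 + 2, so the ratio is (1 + 2)/(4 + 2) = 0.5
```

Scanning eta from 1 down to 0 traces out a ratio function like the one the paper uses to decide how much input-variance reduction is needed for a targeted output-variance reduction.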
Reduction of variance in spectral estimates for correction of ultrasonic aberration.
Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C
2006-01-01
A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.
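The notion of a variance reduction factor for an average of partially correlated estimates can be sketched generically (the common-component correlation model and all parameters are assumptions of the sketch, not the paper's Gaussian-window geometry on icosahedral orbits):

```python
import math
import random
import statistics

def averaged_estimates(k, rho, trials, rng):
    # draw k unit-variance estimates sharing a common component so each
    # pair has correlation rho, then return the average of each set
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    means = []
    for _ in range(trials):
        common = rng.gauss(0.0, 1.0)
        xs = [a * common + b * rng.gauss(0.0, 1.0) for _ in range(k)]
        means.append(sum(xs) / k)
    return means

rng = random.Random(1)
k, rho = 8, 0.25
vrf = statistics.variance(averaged_estimates(k, rho, 50_000, rng))
# theory: Var(mean)/Var(single) = rho + (1 - rho)/k = 0.34375 here
```

The residual term rho shows why overlapping volumes limit the achievable reduction: beyond a certain number of centers, adding more estimates no longer helps, which is the trade-off the paper optimizes over orbit weights and radii.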
AN ASSESSMENT OF MCNP WEIGHT WINDOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. HENDRICKS; C. N. CULBERTSON
2000-01-01
The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
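A weight window acts on particle weights by splitting heavy particles and playing Russian roulette with light ones. A minimal sketch (the window bounds and survival-weight convention are illustrative, not MCNP's implementation):

```python
import random

def apply_weight_window(w, w_low, w_high, rng):
    """Split or roulette a particle of weight w against the window
    [w_low, w_high]; returns the list of surviving particle weights."""
    w_survive = 0.5 * (w_low + w_high)
    if w > w_high:
        # split so each fragment falls back inside the window
        n = int(w / w_survive) + 1
        return [w / n] * n
    if w < w_low:
        # Russian roulette: survive with probability w / w_survive,
        # which preserves the expected total weight
        return [w_survive] if rng.random() < w / w_survive else []
    return [w]

rng = random.Random(5)
split = apply_weight_window(5.0, 0.5, 2.0, rng)          # splitting branch
trials = 200_000
avg = sum(sum(apply_weight_window(0.01, 0.5, 2.0, rng))
          for _ in range(trials)) / trials               # roulette branch
```

Both branches conserve expected weight, so the tally mean is unchanged while the population of weights is kept inside the window, which is what controls the variance.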
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Using negative emotional feedback to modify risky behavior of young moped riders.
Megías, Alberto; Cortes, Abilio; Maldonado, Antonio; Cándido, Antonio
2017-05-19
The aim of this research was to investigate whether the use of messages with negative emotional content is effective in promoting safe behavior in moped riders, and how exactly these messages modulate rider behavior. Participants received negative feedback when performing risky behaviors in a computer task. The effectiveness of this treatment was subsequently tested in a riding simulator. The results demonstrated that riders receiving negative feedback had a lower number of traffic accidents than a control group. The reduction in accidents was accompanied by a set of changes in riding behavior: we observed a lower average speed and greater respect for speed limits. Furthermore, analysis of the steering wheel variance, throttle variance, and average braking force provided evidence for a more even and homogeneous riding style. This greater compliance with traffic regulations and friendlier riding style could explain some of the causes behind the reduction in accidents. The use of negative emotional feedback in driving schools or advanced rider assistance systems could enhance riding performance, making riders aware of unsafe practices and helping them to establish more accurate riding habits. Moreover, the combination of riding simulators and feedback (for example, in the training of novice riders and traffic offenders) could be an efficient tool to improve hazard perception skills and promote safer behaviors.
Analysis of Radiation Transport Due to Activated Coolant in the ITER Neutral Beam Injection Cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royston, Katherine; Wilson, Stephen C.; Risner, Joel M.
2017-07-26
Detailed spatial distributions of the biological dose rate due to a variety of sources are required for the design of the ITER tokamak facility to ensure that all radiological zoning limits are met. During operation, water in the Integrated loop of Blanket, Edge-localized mode and vertical stabilization coils, and Divertor (IBED) cooling system will be activated by plasma neutrons and will flow out of the bioshield through a complex system of pipes and heat exchangers. This paper discusses the methods used to characterize the biological dose rate outside the tokamak complex due to 16N gamma radiation emitted by the activated coolant in the Neutral Beam Injection (NBI) cell of the tokamak building. Activated coolant will enter the NBI cell through the IBED Primary Heat Transfer System (PHTS), and the NBI PHTS will also become activated due to radiation streaming through the NBI system. To properly characterize these gamma sources, the production of 16N, the decay of 16N, and the flow of activated water through the coolant loops were modeled. The impact of conservative approximations on the solution was also examined. Once the source due to activated coolant was calculated, the resulting biological dose rate outside the north wall of the NBI cell was determined through the use of sophisticated variance reduction techniques. The AutomateD VAriaNce reducTion Generator (ADVANTG) software implements methods developed specifically to provide highly effective variance reduction for complex radiation transport simulations such as those encountered with ITER. Using ADVANTG with the Monte Carlo N-particle (MCNP) radiation transport code, radiation responses were calculated on a fine spatial mesh with a high degree of statistical accuracy. Finally, advanced visualization tools were also developed and used to determine pipe cell connectivity, to facilitate model checking, and to post-process the transport simulation results.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and the selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
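A variance-based first-order sensitivity index of the kind ranked here can be estimated with a generic pick-freeze scheme; the linear model and its coefficients are illustrative, and this is the classical Sobol index rather than the paper's distributional extension:

```python
import random

def model(x1, x2):
    # hypothetical linear model; the first-order index of X1 is
    # a^2 / (a^2 + b^2) = 0.8 for these coefficients
    a, b = 2.0, 1.0
    return a * x1 + b * x2

def first_order_index(n, rng):
    # pick-freeze: paired runs share X1 but redraw X2; the covariance of
    # the paired outputs equals the output variance explained by X1
    y, y2 = [], []
    for _ in range(n):
        x1 = rng.gauss(0.0, 1.0)
        y.append(model(x1, rng.gauss(0.0, 1.0)))
        y2.append(model(x1, rng.gauss(0.0, 1.0)))
    my = sum(y) / n
    m2 = sum(y2) / n
    cov = sum((u - my) * (v - m2) for u, v in zip(y, y2)) / (n - 1)
    var = sum((u - my) ** 2 for u in y) / (n - 1)
    return cov / var

s1 = first_order_index(100_000, random.Random(3))
```

The DSA index function described above generalizes this number by tracking how the ranking changes as only part of an input's variance can actually be removed.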
Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.
Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand
2017-12-15
The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized in three levels were selected to describe the exposure relationships separately for the job types (production, cleaning, maintenance, foreman, administration, laboratory, and other jobs) using linear mixed models. The influence of plant and job determinants on the variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models (best) describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased the thoracic aerosol exposure above the mean level for production workers were: packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent with welding and dismantling before repair work increased the exposure, while time with electrical maintenance and oiling decreased the exposure. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold variation (for cleaners) in thoracic aerosol levels. Within-worker (WW) variance contained a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (from 5% to 39% reduction for production, maintenance, and other jobs, respectively, but a 79% increase for foremen) and a substantial influence on the between-worker within-plant variance (30-96% for production, foremen, and other workers).
Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction and 8% increase), and considerable influence on the BP variance (36-58% reduction) compared to the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers; thus, job task may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reducing the exposure levels. Rotation between lower and higher exposed tasks may help to equalize the exposure levels between high and low exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
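The between-plant and within-group variance components discussed above can be estimated, for a balanced one-way layout, with a simple method-of-moments ANOVA; the simulated data and all parameters are illustrative (the paper fits linear mixed models with multiple determinant levels):

```python
import random
import statistics

def variance_components(groups):
    """Balanced one-way random-effects ANOVA, method of moments:
    returns (within-group variance, between-group variance)."""
    k, n = len(groups), len(groups[0])
    msw = sum(statistics.variance(g) for g in groups) / k
    grand = sum(sum(g) for g in groups) / (k * n)
    msb = n * sum((statistics.mean(g) - grand) ** 2
                  for g in groups) / (k - 1)
    return msw, max((msb - msw) / n, 0.0)

# simulate "plants" (groups of repeated measurements) with known
# between-group and within-group standard deviations
rng = random.Random(11)
sigma_b, sigma_w = 2.0, 1.0
groups = []
for _ in range(200):
    effect = rng.gauss(0.0, sigma_b)
    groups.append([effect + rng.gauss(0.0, sigma_w) for _ in range(30)])
within, between = variance_components(groups)
```

Partitioning the variance this way is what lets a determinant's effect be attributed to the between-plant, between-worker, or within-worker component.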
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Yu, Xingyue; Cabooter, Deirdre; Dewil, Raf
2018-05-24
This study aims at investigating the efficiency and kinetics of 2,4-DCP degradation via advanced reduction processes (ARP). Using UV light as the activation method, the highest degradation efficiency of 2,4-DCP was obtained when using sulphite as a reducing agent. The highest degradation efficiency was observed under alkaline conditions (pH = 10.0), for high sulphite dosage and UV intensity, and low 2,4-DCP concentration. For all process conditions, first-order reaction rate kinetics were applicable. A quadratic polynomial equation fitted by a Box-Behnken Design was used as a statistical model and proved to be precise and reliable in describing the significance of the different process variables. The analysis of variance demonstrated that the experimental results were in good agreement with the predicted model (R² = 0.9343), and solution pH, sulphite dose and UV intensity were found to be key process variables in the sulphite/UV ARP. Consequently, the present study provides a promising approach for the efficient degradation of 2,4-DCP with fast degradation kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
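The first-order kinetics mentioned above can be checked by linearizing C(t) = C0·exp(-k·t) and fitting ln(C0/C) against t. A minimal sketch on hypothetical decay data (the rate constant and noise level are illustrative, not values from the study):

```python
import math
import random

def fit_first_order(times, concs):
    """Least-squares slope of ln(C0/C) versus t, i.e. the apparent
    first-order rate constant k, assuming C(t) = C0 * exp(-k * t)."""
    ys = [math.log(concs[0] / c) for c in concs]
    tbar = sum(times) / len(times)
    ybar = sum(ys) / len(ys)
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Hypothetical decay curve: k = 0.05 min^-1 plus 1% measurement noise.
random.seed(0)
times = list(range(0, 61, 5))
concs = [math.exp(-0.05 * t) * (1.0 + random.gauss(0.0, 0.01)) for t in times]
k = fit_first_order(times, concs)   # recovered rate constant, near 0.05
```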
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
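The core idea of using an importance function to steer sampling toward what actually reaches the detector can be sketched on a toy problem. The example below is generic importance sampling for a rare tail probability, not the paper's adjoint-driven scheme; the target quantity and the proposal distribution are illustrative choices.

```python
import math
import random

random.seed(42)
N = 20_000
target = 4.0
exact = math.exp(-target)          # P(X > 4) for X ~ Exp(1)

# Analog (naive) Monte Carlo: few samples ever reach the tail.
naive = sum(1 for _ in range(N) if random.expovariate(1.0) > target) / N

# Importance sampling: draw from a slower-decaying Exp(0.25) proposal,
# which visits the tail often, and reweight by the likelihood ratio f/g.
rate = 0.25
total = 0.0
for _ in range(N):
    x = random.expovariate(rate)
    if x > target:
        total += math.exp(-x) / (rate * math.exp(-rate * x))
is_est = total / N
```

For the same number of histories, the reweighted estimate concentrates samples where they matter, which is exactly the role the deterministic adjoint solution plays for the MC transport solver.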
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering-order-dependent variance reduction method to accelerate the convergence rate of both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism to a wide range of simulation scenarios. In previous studies, variance reduction was achieved either by the scattering phase function forward truncation technique or by the target directional importance sampling technique. Our method combines both. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering-order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy, order by order.
Genetic Variance in the F2 Generation of Divergently Selected Parents
M.P. Koshy; G. Namkoong; J.H. Roberds
1998-01-01
Either by selective breeding for population divergence or by using natural population differences, F2 and advanced generation hybrids can be developed with high variances. We relate the size of the genetic variance to the population divergence based on a forward and backward mutation model at a locus with two alleles with additive gene action....
Exploring factors affecting registered nurses' pursuit of postgraduate education in Australia.
Ng, Linda; Eley, Robert; Tuckett, Anthony
2016-12-01
The aim of this study was to explore the factors influencing registered nurses' pursuit of postgraduate education in specialty nursing practice in Australia. Despite the increased requirement for postgraduate education for advanced practice, little has been reported on the contributory factors involved in the decision to undertake further education. The Nurses' Attitudes Towards Postgraduate Education instrument was administered to 1632 registered nurses from the Nurses and Midwives e-Cohort Study across Australia, with a response rate of 35.9% (n = 568). Data reduction techniques using principal component analysis with varimax rotation were used. The analysis identified a three-factor solution for 14 items, accounting for 52.5% of the variance of the scale: "facilitators," "professional recognition," and "inhibiting factors." Facilitators of postgraduate education accounted for 28.5% of the variance, including: (i) improves knowledge; (ii) increases nurses' confidence in clinical decision-making; (iii) enhances nurses' careers; (iv) improves critical thinking; (v) improves nurses' clinical skill; and (vi) increased job satisfaction. This new instrument has potential clinical and research applications to support registered nurses' pursuit of postgraduate education. © 2016 John Wiley & Sons Australia, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.
Automated variance reduction for MCNP using deterministic methods.
Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B
2005-01-01
In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
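The weight-window game that these deterministic importance maps drive is simple to sketch: particles above the window are split into lighter copies, particles below it play Russian roulette, and the expected total weight is preserved, so the game is unbiased. A minimal single-window sketch in Python (the bounds and the starting weight distribution are hypothetical; real weight windows are space- and energy-dependent):

```python
import random

def apply_weight_window(weights, w_low=0.25, w_high=4.0, w_survive=1.0):
    """Play the weight-window game: split particles above the window,
    Russian-roulette particles below it.  Expected total weight is
    preserved, so the game introduces no bias."""
    out = []
    for w in weights:
        if w > w_high:
            n = int(w // w_survive)          # split into n lighter copies
            out.extend([w / n] * n)
        elif w < w_low:
            if random.random() < w / w_survive:
                out.append(w_survive)        # survivor carries weight w_survive
            # else: particle is killed (expected weight is still w)
        else:
            out.append(w)                    # inside the window: unchanged
    return out

random.seed(7)
before = [random.expovariate(1.0) for _ in range(50000)]
after = apply_weight_window(before)
rel_err = abs(sum(after) - sum(before)) / sum(before)   # small, statistically
```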
Deflation as a method of variance reduction for estimating the trace of a matrix inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas
2017-04-06
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
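The benefit of deflation can be sketched on a toy trace estimation. Below, a known dominant rank-1 direction (standing in for precomputed singular vectors) is handled exactly and only the remainder is probed stochastically. The matrix is applied directly for simplicity, whereas estimating tr(A⁻¹) in practice requires a linear solve per probe; all sizes and values are illustrative.

```python
import math
import random

def hutch(matvec, n, n_samples, rng):
    """Hutchinson estimator of tr(M) using Rademacher probe vectors z:
    E[z^T M z] = tr(M), with variance from the off-diagonal of M."""
    total = 0.0
    for _ in range(n_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Mz = matvec(z)
        total += sum(zi * mzi for zi, mzi in zip(z, Mz))
    return total / n_samples

n = 16
u = [1.0 / math.sqrt(n)] * n            # known dominant direction (unit norm)
diag = [1.0 + 0.05 * i for i in range(n)]

def matvec(z):                          # M = D + 50 u u^T
    uz = sum(ui * zi for ui, zi in zip(u, z))
    return [d * zi + 50.0 * uz * ui for d, zi, ui in zip(diag, z, u)]

def matvec_deflated(z):                 # M - 50 u u^T: dominant part removed
    return [d * zi for d, zi in zip(diag, z)]

exact = sum(diag) + 50.0
rng = random.Random(3)
plain = [hutch(matvec, n, 30, rng) for _ in range(200)]
# Deflated estimator: add the known rank-1 trace exactly, probe the rest.
defl = [50.0 + hutch(matvec_deflated, n, 30, rng) for _ in range(200)]

def spread(xs):                         # population variance of the estimates
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Here deflating the single dominant direction removes all of the off-diagonal structure, so the deflated estimator's spread collapses; with many retained directions the reduction is partial, as in the paper's analysis.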
FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport
NASA Astrophysics Data System (ADS)
Munk, Madicken
The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems with deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems with strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics for quantifying flux anisotropy is used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics.
As a result, a more thorough investigation into the effects of anisotropy, and the degree of anisotropy, on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O is less sensitive to discretization than CADIS for both quadrature order and PN order, although more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the O-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the responses of the O-methods to changes in quadrature order and PN order, and their ray-effect mitigation, are strong indicators that the methods are more resilient than their predecessors to strong anisotropies in the flux. With further characterization, the full potential of the O-methods can be realized. The methods can then be applied to geometrically complex, materially diverse problems and help to advance system modelling for deep-penetration radiation transport problems with strong anisotropies in the flux.
[Perimetric changes in advanced glaucoma].
Feraru, Crenguta Ioana; Pantalon, Anca
2011-01-01
The evaluation of various perimetric aspects in advanced glaucoma stages correlated to morpho-functional changes. MATERIAL AND METHOD: Retrospective clinical trial over a 10-month period that included patients with advanced glaucoma, for whom several computerised visual field tests were recorded (central 24-2 strategy, 10-2 strategy with either III or V Goldmann stimulus spot size) along with other morpho-functional ocular parameters: VA, IOP, optic disk analysis. We included 56 eyes from 45 patients. In most cases (89%) it was an open angle glaucoma (either primary or secondary). Mean visual acuity was 0.45 +/- 0.28. Regarding the perimetric deficit, 83% had advanced deficit, 9% moderate, and 8% early visual changes. As perimetric type of defect, a majority showed general reduction of sensitivity (33 eyes) plus ring-shaped scotoma. In 6 eyes (10.7%) with only a central island of vision left, we performed the central 10-2 strategy with III or V Goldmann stimulus spot size. Statistical analysis showed weak correlation between visual acuity and the quantitative perimetric parameters (MD and PSD), and variance analysis found a multiple correlation parameter p = 0.07, indicating no linear correspondence between the morpho-functional parameters VA-MD(PSD) and C/D ratio. In advanced glaucoma stages, the perimetric changes are mostly severe. Perimetric evaluation is essential in these stages and needs to be individualised.
NASA Astrophysics Data System (ADS)
Lee, Yi-Kang
2017-09-01
Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimate of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application to nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep-penetration calculation with a Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimates. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation.
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments.
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
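The antithetic-variates technique used above pairs each uniform draw u with its mirror 1-u, so that over- and under-estimates partially cancel. A minimal sketch on a toy integrand (not the UKPDS 68 model; the sample size is arbitrary):

```python
import math
import random

random.seed(2024)
N = 4000
exact = math.e - 1.0               # E[exp(U)] for U ~ Uniform(0, 1)

plain, anti = [], []
for _ in range(N):
    u1, u2 = random.random(), random.random()
    plain.append((math.exp(u1) + math.exp(u2)) / 2)     # two independent draws
    u = random.random()
    anti.append((math.exp(u) + math.exp(1.0 - u)) / 2)  # antithetic pair

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```

Because exp(u) is monotone in u, the pair members are negatively correlated and the paired estimator has markedly lower variance for the same number of function evaluations, which is why fewer replications were needed in the study.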
Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.
Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia
2016-08-01
Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, environmental tobacco smoke (ETS) items was not a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Control Variate Estimators of Survivor Growth from Point Samples
Francis A. Roesch; Paul C. van Deusen
1993-01-01
Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...
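The control-variate idea can be sketched as follows (hypothetical numbers, not the authors' estimators): given a control quantity X whose mean is known exactly, the estimator ybar - b*(xbar - E[X]) with the variance-minimizing coefficient b = Cov(X, Y)/Var(X) removes the part of the sampling error that is correlated with X.

```python
import random

def control_variate_estimate(pairs, control_mean):
    # Control-variate estimator: ybar - b * (xbar - E[X]), with the
    # variance-minimizing coefficient b = Cov(X, Y) / Var(X).
    n = len(pairs)
    xbar = sum(x for x, _ in pairs) / n
    ybar = sum(y for _, y in pairs) / n
    cov = sum((x - xbar) * (y - ybar) for x, y in pairs) / (n - 1)
    var_x = sum((x - xbar) ** 2 for x, _ in pairs) / (n - 1)
    b = cov / var_x
    return ybar - b * (xbar - control_mean)

# Hypothetical growth example: X = initial volume (known mean 50),
# Y = survivor growth, correlated with X; true E[Y] is 5.0 here.
rng = random.Random(7)
pairs = []
for _ in range(500):
    x = rng.gauss(50.0, 10.0)       # control variate, E[X] = 50 known
    y = 0.1 * x + rng.gauss(0, 1)   # target quantity, correlated with X
    pairs.append((x, y))
est = control_variate_estimate(pairs, 50.0)
print(round(est, 1))
```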
Monte Carlo isotopic inventory analysis for complex nuclear systems
NASA Astrophysics Data System (ADS)
Phruksarojanakun, Phiphat
Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling aims at increasing frequencies of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/(R²T)) and the other on the overall detection limit corrected by the relative error (1/(D_k R²T)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner.
Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The results confirm that MCise is a reliable tool for inventory analysis of complex nuclear systems.
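The relative-error-based figure of merit follows the usual Monte Carlo convention, FOM = 1/(R²T), which is roughly constant for a given method; a minimal illustration with made-up run statistics:

```python
def figure_of_merit(rel_error, cpu_time):
    # Standard Monte Carlo figure of merit, 1 / (R^2 * T): roughly
    # constant for a given scheme, so a higher value indicates a more
    # efficient variance reduction technique.
    return 1.0 / (rel_error ** 2 * cpu_time)

# Comparing a hypothetical analog run against a variance-reduced run:
fom_analog = figure_of_merit(rel_error=0.10, cpu_time=100.0)   # R = 10%, 100 s
fom_reduced = figure_of_merit(rel_error=0.02, cpu_time=400.0)  # R = 2%, 400 s
print(round(fom_reduced / fom_analog, 2))  # prints 6.25
```

Even though the variance-reduced run took four times longer, its much smaller relative error makes it the more efficient scheme by this measure.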
Advanced Communication Processing Techniques Held in Ruidoso, New Mexico on 14-17 May 1989
1990-01-01
Criteria: probability of detection and false alarm; variances of parameter estimators; probability of correct classification and rejection. The standard Neyman-Pearson approach for detection, variances for...
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
Preference uncertainty, preference learning, and paired comparison experiments
David C. Kingsley; Thomas C. Brown
2010-01-01
Results from paired comparison experiments suggest that as respondents progress through a sequence of binary choices they become more consistent, apparently fine-tuning their preferences. Consistency may be indicated by the variance of the estimated valuation distribution measured by the error term in the random utility model. A significant reduction in the variance is...
Measuring Professional Identity Development among Counselor Trainees
ERIC Educational Resources Information Center
Prosek, Elizabeth A.; Hurt, Kara M.
2014-01-01
This study examined the differences in professional identity development between novice and advanced counselor trainees (N = 161). Multivariate analyses of variance indicated significant differences between groups. Specifically, advanced counselor trainees demonstrated greater professional development compared with novice counselor trainees. No…
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
Value of biologic therapy: a forecasting model in three disease areas.
Paramore, L Clark; Hunter, Craig A; Luce, Bryan R; Nordyke, Robert J; Halbert, R J
2010-01-01
Forecast the return on investment (ROI) for advances in biologic therapies in years 2015 and 2030, based upon impact on disease prevalence, morbidity, and mortality for asthma, diabetes, and colorectal cancer. A deterministic, spreadsheet-based, forecasting model was developed based on trends in demographics and disease epidemiology. 'Return' was defined as reductions in disease burden (prevalence, morbidity, mortality) translated into monetary terms; 'investment' was defined as the incremental costs of biologic therapy advances. Data on disease prevalence, morbidity, mortality, and associated costs were obtained from government survey statistics or published literature. Expected impact of advances in biologic therapies was based on expert opinion. Gains in quality-adjusted life years (QALYs) were valued at $100,000 per QALY. The base case analysis, in which reductions in disease prevalence and mortality predicted by the expert panel are not considered, shows the resulting ROIs remain positive for asthma and diabetes but fall below $1 for colorectal cancer. Analysis involving expert panel predictions indicated positive ROI results for all three diseases at both time points, ranging from $207 for each incremental dollar spent on biologic therapies to treat asthma in 2030, to $4 for each incremental dollar spent on biologic therapies to treat colorectal cancer in 2015. If QALYs are not considered, the resulting ROIs remain positive for all three diseases at both time points. Society may expect substantial returns from investments in innovative biologic therapies. These benefits are most likely to be realized in an environment of appropriate use of new molecules. The potential variance between forecasted (from expert opinion) and actual future health outcomes could be significant. Similarly, the forecasted growth in use of biologic therapies relied upon unvalidated market forecasts.
YAMASHITA, KEISHI; SAKURAMOTO, SHINICHI; MIENO, HIROAKI; NEMOTO, MASAYUKI; SHIBATA, TOMOTAKA; KATADA, NATSUYA; OHTSUKI, SHIGEAKI; SAKAMOTO, YASUTOSHI; HOSHI, KEIKA; WANG, GUOQIN; HEMMI, OSAMU; SATOH, TOSHIHIKO; KIKUCHI, SHIRO; WATANABE, MASAHIKO
2015-01-01
Systemic abrogation of TGF-β signaling results in tumor reduction through cytotoxic T lymphocytes activity in a mouse model. The administration of polysaccharide-Kureha (PSK) into tumor-bearing mice also showed tumor regression with reduced TGF-β. However, there have been no studies regarding the PSK administration to cancer patients and the association with plasma TGF-β. PSK (3 g/day) was administered as a neoadjuvant therapy for 2 weeks before surgery. In total, 31 advanced gastric cancer (AGC) patients were randomly assigned to group A (no neoadjuvant PSK; n=14) or B (neoadjuvant PSK therapy; n=17). Plasma TGF-β was measured pre- and postoperatively. The allocation factors were clinical stage (cStage) and gender. Plasma TGF-β ranged from 1.85–43.5 ng/ml (average, 9.50 ng/ml) in AGC, and 12 patients (38.7%) had a high value, >7.0 ng/ml. These patients were largely composed of poorly-differentiated adenocarcinoma with pathological stage III/IV. All the six elevated cases in group B showed a significant reduction of plasma TGF-β (from 21.6 to 4.5 ng/ml, on average), whereas this was not exhibited in group A. The cases within the normal limits of TGF-β remained unchanged irrespective of PSK treatment. Analysis of variance showed a statistically significant reduction in the difference of plasma TGF-β between groups A and B (P=0.019). PSK reduced the plasma TGF-β in AGC patients when the levels were initially high. The clinical advantage of PSK may, however, be restricted to specific histological types of AGC. Perioperative suppression of TGF-β by PSK may antagonize cancer immune evasion and improve patient prognosis in cases of AGC. PMID:26137253
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
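The importance-sampling mechanism can be shown in a generic sketch (a shifted Gaussian proposal for a rare-event probability, not the Fokker–Planck particle scheme itself): sample from a biased distribution that concentrates samples where they matter, and reweight each sample by the likelihood ratio.

```python
import math
import random

def plain_samples(n, rng):
    # Naive indicator samples for the rare event P(X > 3), X ~ N(0, 1).
    return [1.0 if rng.gauss(0, 1) > 3.0 else 0.0 for _ in range(n)]

def importance_samples(n, rng, shift=3.0):
    # Draw from the proposal N(shift, 1) and reweight by the likelihood
    # ratio N(0,1)/N(shift,1) = exp(-shift*x + shift^2/2).
    out = []
    for _ in range(n):
        x = rng.gauss(shift, 1)
        w = math.exp(-shift * x + 0.5 * shift * shift)
        out.append(w if x > 3.0 else 0.0)
    return out

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

rng = random.Random(11)
plain = plain_samples(50000, rng)
biased = importance_samples(50000, rng)
# Both estimate P(X > 3) ~ 1.35e-3, but the reweighted samples have far
# lower per-sample variance.
print(var(biased) < var(plain))  # prints True
```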
Features of MCNP6 Relevant to Medical Radiation Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, H. Grady III; Goorley, John T.
2012-08-29
MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.
Mutilating Data and Discarding Variance: The Dangers of Dichotomizing Continuous Variables.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper reviews issues involved in converting continuous variables to nominal variables to be used in the OVA techniques. The literature dealing with the dangers of dichotomizing continuous variables is reviewed. First, the assumptions invoked by OVA analyses are reviewed in addition to concerns regarding the loss of variance and a reduction in…
Control Variates and Optimal Designs in Metamodeling
2013-03-01
2.4.5 Selection of Control Variates for Inclusion in Model... meet the normality assumption (Nelson 1990, Nelson and Yang 1992, Anonuevo and Nelson 1988). Jackknifing, splitting, and bootstrapping can be used to... freedom to estimate the variance are lost due to being used for the control variate inclusion. This means the variance reduction achieved must now be
Leading indicators of mosquito-borne disease elimination.
O'Regan, Suzanne M; Lillie, Jonathan W; Drake, John M
Mosquito-borne diseases contribute significantly to the global disease burden. High-profile elimination campaigns are currently underway for many parasites, e.g., Plasmodium spp., the causal agent of malaria. Sustaining momentum near the end of elimination programs is often difficult to achieve and consequently quantitative tools that enable monitoring the effectiveness of elimination activities after the initial reduction of cases has occurred are needed. Documenting progress in vector-borne disease elimination is a potentially important application for the theory of critical transitions. Non-parametric approaches that are independent of model-fitting would advance infectious disease forecasting significantly. In this paper, we consider compartmental Ross-McDonald models that are slowly forced through a critical transition through gradually deployed control measures. We derive expressions for the behavior of candidate indicators, including the autocorrelation coefficient, variance, and coefficient of variation in the number of human cases during the approach to elimination. We conducted a simulation study to test the performance of each summary statistic as an early warning system of mosquito-borne disease elimination. Variance and coefficient of variation were highly predictive of elimination but autocorrelation performed poorly as an indicator in some control contexts. Our results suggest that tipping points (bifurcations) in mosquito-borne infectious disease systems may be foreshadowed by characteristic temporal patterns of disease prevalence.
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
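The correlated-process idea can be sketched in a toy setting (hypothetical drift terms, not the authors' Fokker–Planck scheme): drive a main process and an auxiliary process with a known mean using the same random increments, then estimate only their small, low-variance difference and add the known mean back.

```python
import math
import random

def simulate_pair(rng, x0=1.0, dt=0.01, steps=100, sigma=0.5):
    # Main (slightly nonlinear) and auxiliary (linear) processes share
    # the SAME Brownian increments, so their paths stay highly correlated.
    x = y = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)
        x += (-x - 0.05 * x ** 3) * dt + sigma * dw   # main process
        y += -y * dt + sigma * dw                     # auxiliary process
    return x, y

rng = random.Random(42)
n = 2000
xs, diffs = [], []
for _ in range(n):
    x, y = simulate_pair(rng)
    xs.append(x)
    diffs.append(x - y)

# E[Y_T] = x0 * exp(-T) is known analytically (T = steps * dt = 1), up to
# the Euler discretization error, so the corrected estimator adds it
# back to the mean difference.
coupled_est = sum(diffs) / n + 1.0 * math.exp(-1.0)

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

print(var(diffs) < 0.2 * var(xs))  # the difference carries little noise
```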
MPF Top-Mast Measured Temperature
1997-10-14
This temperature figure shows the change in the mean and variance of the temperature fluctuations at the Pathfinder landing site. Sols 79 and 80 are very similar, with a significant reduction of the mean and variance on Sol 81. The science team suspects that a cold front passed over the landing site between Sols 80 and 81. http://photojournal.jpl.nasa.gov/catalog/PIA00978
A Framework for Analyzing Biometric Template Aging and Renewal Prediction
2009-03-01
databases has sufficient data to support template aging over an extended period of time. Another assumption is that there is significant variance to...mentioned above for enrollment also apply to verification. When combining enrollment and verification, there is a significant amount of variance that... significant advancement in the biometrics body of knowledge. This research presents the CTARP Framework, a novel foundational framework for methods of
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
Techniques designed for variance reduction are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool
Word Durations in Non-Native English
Baker, Rachel E.; Baese-Berk, Melissa; Bonnasse-Gahot, Laurent; Kim, Midam; Van Engen, Kristin J.; Bradlow, Ann R.
2010-01-01
In this study, we compare the effects of English lexical features on word duration for native and non-native English speakers and for non-native speakers with different L1s and a range of L2 experience. We also examine whether non-native word durations lead to judgments of a stronger foreign accent. We measured word durations in English paragraphs read by 12 American English (AE), 20 Korean, and 20 Chinese speakers. We also had AE listeners rate the `accentedness' of these non-native speakers. AE speech had shorter durations, greater within-speaker word duration variance, greater reduction of function words, and less between-speaker variance than non-native speech. However, both AE and non-native speakers showed sensitivity to lexical predictability by reducing second mentions and high frequency words. Non-native speakers with more native-like word durations, greater within-speaker word duration variance, and greater function word reduction were perceived as less accented. Overall, these findings identify word duration as an important and complex feature of foreign-accented English. PMID:21516172
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of the geostatistics method (variance-reduction method) and simulated annealing as the algorithm of optimization during the redesign process. The result shows that the new rain gauge locations provide a minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
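A skeletal version of such an optimization loop is sketched below, using a mean squared nearest-gauge distance as a stand-in objective (the actual method minimizes the kriging estimation variance, which depends on the fitted variogram):

```python
import math
import random

def objective(gauges, grid):
    # Proxy for the estimated variance: mean squared distance from each
    # grid point to its nearest gauge (a real design would evaluate the
    # kriging variance from a fitted variogram here).
    return sum(min((px - gx) ** 2 + (py - gy) ** 2
                   for gx, gy in gauges) for px, py in grid) / len(grid)

def anneal_network(candidates, n_gauges, grid, seed=0, steps=2000):
    rng = random.Random(seed)
    current = rng.sample(candidates, n_gauges)
    cost = objective(current, grid)
    best, best_cost = list(current), cost
    temp = 1.0
    for _ in range(steps):
        temp *= 0.995                                            # geometric cooling
        trial = list(current)
        trial[rng.randrange(n_gauges)] = rng.choice(candidates)  # relocate one gauge
        c = objective(trial, grid)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            current, cost = trial, c
            if c < best_cost:
                best, best_cost = list(trial), c
    return best, best_cost

grid = [(x / 9, y / 9) for x in range(10) for y in range(10)]
best, best_cost = anneal_network(grid, 5, grid)
print(best_cost < objective(grid[:5], grid))  # beats gauges clustered on one edge
```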
Discrete filtering techniques applied to sequential GPS range measurements
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1987-01-01
The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
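A two-state (range, range-rate) Kalman filter of the kind described can be sketched as follows, with the 2x2 matrix algebra written out by hand and illustrative noise levels (not the paper's receiver parameters):

```python
import random

def kalman_filter(ranges, dt=1.0, meas_var=4.0, accel_var=0.1):
    # State: [range, range-rate]; constant-velocity model, H = [1, 0].
    x = [ranges[0], 0.0]
    p = [[meas_var, 0.0], [0.0, 1.0]]
    out = []
    for z in ranges:
        # Predict step: propagate state and covariance one interval.
        x = [x[0] + dt * x[1], x[1]]
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1]
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + accel_var
        # Update step with the range measurement z.
        s = p00 + meas_var
        k0, k1 = p00 / s, p10 / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        p = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        out.append(x[0])
    return out

# Constant-velocity truth plus white measurement noise:
rng = random.Random(3)
true = [1000.0 + 5.0 * t for t in range(200)]
meas = [r + rng.gauss(0.0, 2.0) for r in true]
est = kalman_filter(meas)
raw_mse = sum((m - r) ** 2 for m, r in zip(meas, true)) / len(true)
kf_mse = sum((e - r) ** 2 for e, r in zip(est[100:], true[100:])) / 100
print(kf_mse < raw_mse)  # smoothed ranges beat raw measurements
```

The ratio of input to output noise variance plays the role of the steady-state noise reduction factor reported in the abstract; its value depends on the assumed process and measurement noise levels.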
Analytic variance estimates of Swank and Fano factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
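The two metrics themselves are simple functions of the moments of the detector output distribution; a minimal sketch (with made-up pulse heights) of the quantities whose variances the authors estimate:

```python
def swank_and_fano(pulses):
    # Swank factor A = M1^2 / (M0 * M2), written here with the
    # distribution normalized so that M0 = 1; Fano factor F = variance / mean.
    n = len(pulses)
    m1 = sum(pulses) / n
    m2 = sum(x * x for x in pulses) / n
    swank = m1 * m1 / m2
    fano = (m2 - m1 * m1) / m1
    return swank, fano

# A narrow pulse-height distribution has a Swank factor close to 1.
s, f = swank_and_fano([100.0, 101.0, 99.0, 100.0, 100.0])
print(round(s, 4), round(f, 4))  # prints 1.0 0.004
```

Because both metrics accumulate only running sums of x and x², they add negligible overhead to a Monte Carlo tally, which is the computational point made above.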
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
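The approach can be illustrated on a simplified rating of the form R = mean - k*std (the actual NRR computation involves per-frequency attenuations and C-weighting; the numbers below are hypothetical). First-order propagation of errors gives Var(R) ~ sigma^2/n + k^2*sigma^2/(2(n-1)) for normal data, which can be checked against a Monte Carlo simulation of subject attenuations:

```python
import random
import statistics

def rating_error_analytic(sigma, n, k=2.0):
    # First-order propagation of errors for R = mean - k * std:
    # Var(R) ~ sigma^2 / n + k^2 * sigma^2 / (2 * (n - 1)).
    return (sigma ** 2 / n + k ** 2 * sigma ** 2 / (2 * (n - 1))) ** 0.5

def rating_error_monte_carlo(mu, sigma, n, k=2.0, trials=20000, seed=5):
    # Simulate many panels of n subjects and take the spread of ratings.
    rng = random.Random(seed)
    ratings = []
    for _ in range(trials):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        ratings.append(statistics.mean(sample) - k * statistics.stdev(sample))
    return statistics.stdev(ratings)

analytic = rating_error_analytic(sigma=5.0, n=10)
simulated = rating_error_monte_carlo(mu=30.0, sigma=5.0, n=10)
print(abs(analytic - simulated) / analytic < 0.1)  # close agreement
```

The close agreement between the closed-form and simulated error terms mirrors the linear relationship reported in the abstract.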
Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction
NASA Astrophysics Data System (ADS)
Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng
Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely: reduction temperature, reduction time and proportion of additive on recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at a 92.5% confidence level.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Craig, Kellie D.
2011-01-01
The intent of the Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) effort is to: (1) reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and (2) enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Key concepts: (1) Offerors must propose an Advanced Booster concept that meets SLS Program requirements; (2) the engineering demonstration and/or risk reduction must relate to the Offeror's Advanced Booster concept; (3) the NASA Research Announcement (NRA) will not be prescriptive in defining engineering demonstration and/or risk reduction.
Berger, Philip; Messner, Michael J; Crosby, Jake; Vacs Renwick, Deborah; Heinrich, Austin
2018-05-01
Spore reduction can be used as a surrogate measure of Cryptosporidium natural filtration efficiency. Estimates of log10 (log) reduction were derived from spore measurements in paired surface and well water samples in Casper Wyoming and Kearney Nebraska. We found that these data were suitable for testing the hypothesis (H0) that the average reduction at each site was 2 log or less, using a one-sided Student's t-test. After establishing data quality objectives for the test (expressed as tolerable Type I and Type II error rates), we evaluated the test's performance as a function of the (a) true log reduction, (b) number of paired samples assayed and (c) variance of observed log reductions. We found that 36 paired spore samples are sufficient to achieve the objectives over a wide range of variance, including the variances observed in the two data sets. We also explored the feasibility of using smaller numbers of paired spore samples to supplement bioparticle counts for screening purposes in alluvial aquifers, to differentiate wells with large volume surface water induced recharge from wells with negligible surface water induced recharge. With key assumptions, we propose a normal statistical test of the same hypothesis (H0), but with different performance objectives. As few as six paired spore samples appear adequate as a screening metric to supplement bioparticle counts to differentiate wells in alluvial aquifers with large volume surface water induced recharge. For the case when all available information (including failure to reject H0 based on the limited paired spore data) leads to the conclusion that wells have large surface water induced recharge, we recommend further evaluation using additional paired biweekly spore samples. Published by Elsevier GmbH.
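The one-sided test described above can be sketched with hypothetical data (36 paired log reductions, not the Casper or Kearney measurements): compute the t statistic for H0 that the mean log reduction is at most 2 and compare it with the one-sided critical value.

```python
import statistics

def one_sided_t_statistic(log_reductions, h0_mean=2.0):
    # t statistic for H0: true mean log reduction <= h0_mean, against the
    # one-sided alternative that it is greater; returns (t, degrees of freedom).
    n = len(log_reductions)
    xbar = statistics.mean(log_reductions)
    s = statistics.stdev(log_reductions)
    return (xbar - h0_mean) / (s / n ** 0.5), n - 1

# Hypothetical data: 36 paired log reductions averaging well above 2.
data = [2.6 + 0.05 * ((i % 7) - 3) for i in range(36)]
t, df = one_sided_t_statistic(data)
# Reject H0 at alpha = 0.05 if t exceeds the critical value t_{0.95, 35},
# which is approximately 1.69.
print(t > 1.69)  # prints True
```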
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Raw water quality range & considerations:
1. Ion exchange (IE): (a) Intermediate; all ground waters.
2...-filtration.
5. Lime softening: (d) Advanced; all waters.
6. Green sand filtration: (e) Basic.
7. Co-precipitation with barium sulfate: (f) Intermediate to Advanced; ground waters with suitable water quality.
8...
Describing Chinese hospital activity with diagnosis related groups (DRGs). A case study in Chengdu.
Gong, Zhiping; Duckett, Stephen J; Legge, David G; Pei, Likun
2004-07-01
To examine the applicability of an Australian casemix classification system to the description of Chinese hospital activity. A total of 161,478 inpatient episodes from three Chengdu hospitals with demographic, diagnosis, procedure and billing data for the year 1998/1999, 1999/2000 and 2000/2001 were grouped using the Australian refined-diagnosis related groups (AR-DRGs) (version 4.0) grouper. Reduction in variance (R²) and coefficient of variation (CV). Untrimmed reduction in variance (R²) was 0.12 and 0.17 for length of stay (LOS) and cost respectively. After trimming, R² values were 0.45 and 0.59 for length of stay and cost respectively. The Australian refined DRGs provide a good basis for developing a Chinese grouper.
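The reduction-in-variance statistic used here is one minus the ratio of within-group to total sum of squares; a toy illustration with two hypothetical DRGs and made-up lengths of stay:

```python
def reduction_in_variance(groups):
    # R^2 = 1 - (within-group sum of squares / total sum of squares):
    # the fraction of variance in LOS or cost explained by the grouping.
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    total_ss = sum((v - grand) ** 2 for v in all_vals)
    within_ss = 0.0
    for g in groups:
        m = sum(g) / len(g)
        within_ss += sum((v - m) ** 2 for v in g)
    return 1.0 - within_ss / total_ss

# Two hypothetical DRGs with distinct mean lengths of stay:
r2 = reduction_in_variance([[2, 3, 4], [8, 9, 10]])
print(round(r2, 3))  # prints 0.931
```

A grouping that separates homogeneous cases well drives the within-group sum of squares down, which is why trimming outlier episodes raised the reported R² values.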
[Locked volar plating for complex distal radius fractures: maintaining radial length].
Jeudy, J; Pernin, J; Cronier, P; Talha, A; Massin, P
2007-09-01
Maintaining radial length, likely the main challenge in the treatment of complex distal radius fractures, is necessary for complete recovery of grip strength and of the pro-supination range. In spite of frequent secondary displacements, bridging external fixation has remained the reference method, either isolated or in association with additional percutaneous pins or volar plating. There also seems to be a relation between algodystrophy and the duration of traction applied on the radio-carpal joint. Fixed-angle volar plating offers the advantage of maintaining the reduction until fracture healing, without bridging the joint. In a prospective study, forty-three consecutive fractures of the distal radius with positive ulnar variance were treated with open reduction and fixed-angle volar plating. Results were assessed with special attention to the radial length and angulation obtained and maintained throughout treatment, based on repeated measurements of the ulnar variance and radial angulation in the first six months postoperatively. The correction of the ulnar variance was maintained until complete recovery, independently of the initial metaphyseal comminution and of the amount of radial length gained at reduction. Only 3 patients lost more than 1 mm of radial length after reduction. The posterior tilt of the distal radial epiphysis was incompletely reduced in 13 cases, whereas reduction was partially lost in 6 elderly osteoporotic female patients. There were 8 articular malunions, all of them less than 2 mm. Secondary displacements were found to be related to a deficient locking technique. Eight patients developed algodystrophy. The risk factors for algodystrophy were articular malunion, associated posterior pinning, and associated lesions of the ipsilateral upper limb. Provided that the locking technique was correct, this type of fixation appeared efficient in maintaining radial length in complex fractures of the distal radius.
The main challenge remains the reduction of displaced articular fractures. Based on these results, it is not possible to conclude that this method is superior to external fixation.
ERIC Educational Resources Information Center
Thomas, L. M.; Thomas, Suzanne G.
This obtrusive post-hoc quasi-experimental study investigated Scholastic Assessment Test (SAT) scores of 111 high school students in grades 10 through 12. Fifty-three students were enrolled in at least one Advanced Placement (AP) course at the time of the study. General factorial analysis of variance (ANOVA) tested for significant differences…
Schiebener, Johannes; Brand, Matthias
2017-06-01
Previous literature has explained older individuals' disadvantageous decision-making under ambiguity in the Iowa Gambling Task (IGT) by reduced emotional warning signals preceding decisions. We argue that age-related reductions in IGT performance may also be explained by reductions in certain cognitive abilities (reasoning, executive functions). In 210 participants (18-86 years), we found that the age-related variance in IGT performance occurred only in the last 60 trials. The effect was mediated by cognitive abilities and their relation with decision-making performance under risk with explicit rules (Game of Dice Task). Thus, reductions in cognitive functions in older age may be associated both with a reduced ability to gain explicit insight into the rules of the ambiguous decision situation and with a failure to consistently choose the less risky options after the rules have been understood explicitly. Previous literature may have underestimated the relevance of cognitive functions for age-related decline in decision-making performance under ambiguity.
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.
2016-02-01
Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product; hence, quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variance reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest part of its financial resources in the suppliers' learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and the internal and external quality costs. A learning curve determines how the suppliers' employees respond to the learning processes in reducing component variance.
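The trade-off such a model captures can be sketched by balancing a learning-curve investment against a quality loss that grows with variance. The logarithmic investment term, the quadratic (Taguchi-style) loss, the coefficients, and the grid search below are all illustrative assumptions, not the paper's actual formulation.

```python
import math

def total_cost(var, var0=1.0, invest_rate=5.0, loss_coeff=20.0):
    """Total cost of operating at component variance `var`:
    learning-curve investment to move from var0 down to var,
    plus a quality loss proportional to the remaining variance."""
    investment = invest_rate * math.log(var0 / var)  # learning-curve cost
    quality_loss = loss_coeff * var                  # internal + external failures
    return investment + quality_loss

# Coarse grid search for the cost-minimizing variance level
grid = [i / 100 for i in range(5, 101)]
best_var = min(grid, key=total_cost)
```

With these coefficients the analytic optimum is invest_rate / loss_coeff = 0.25: reducing variance further costs more in learning investment than it saves in quality losses.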
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependence on ion species and ion energy, are typically subject to large relative uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
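A crude version of such a variance-based SA can be sketched by pinning one uncertain input at a time and measuring how much output variance disappears. The linear-quadratic effect alpha*d + beta*d^2 and the assumed 20%/40% input uncertainties are illustrative, and this pinning index is a simplification of the full Sobol-style analysis the abstract describes.

```python
import random
from statistics import pvariance

random.seed(0)

def effect(alpha, beta, dose):
    # Illustrative linear-quadratic effect: E = alpha*d + beta*d^2
    return alpha * dose + beta * dose ** 2

def draw_alpha():
    return random.gauss(0.2, 0.04)    # alpha with ~20% relative uncertainty

def draw_beta():
    return random.gauss(0.02, 0.008)  # beta with ~40% relative uncertainty

def sensitivity(n=20000, dose=2.0):
    """Fraction of output variance removed when one uncertain input
    is pinned to its nominal value (a crude variance-based ranking)."""
    full = [effect(draw_alpha(), draw_beta(), dose) for _ in range(n)]
    fix_a = [effect(0.2, draw_beta(), dose) for _ in range(n)]
    fix_b = [effect(draw_alpha(), 0.02, dose) for _ in range(n)]
    v = pvariance(full)
    return {"alpha": 1 - pvariance(fix_a) / v,
            "beta": 1 - pvariance(fix_b) / v}

s = sensitivity()
```

With these assumed distributions, alpha dominates the output variance at this dose even though beta carries the larger relative uncertainty, which is exactly the kind of ranking the abstract uses to decide where uncertainty reduction is most rewarding.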
2008-09-15
however, a variety of so-called variance-reduction techniques (VRTs) have been developed, which reduce output variance with little or no...additional computational effort. VRTs typically achieve this via judicious and careful reuse of the basic underlying random numbers. Perhaps the best-known...typical simulation situation: change a weapons-system configuration and see what difference it makes). Key to making CRN and most other VRTs work
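Common random numbers (CRN), the technique alluded to above, can be illustrated with a toy comparison of two exponential service rates; the rates and sample size are arbitrary assumptions. Driving both configurations with the same uniform draws makes their outputs positively correlated, which shrinks the variance of the estimated difference.

```python
import math
import random
from statistics import pvariance

def service_time(rate, u):
    """Inverse-transform sample of an Exponential(rate) service time."""
    return -math.log(1.0 - u) / rate

def difference_samples(n, common):
    """Samples of (config A minus config B) service time, with or
    without common random numbers driving both configurations."""
    rng = random.Random(1)
    diffs = []
    for _ in range(n):
        u1 = rng.random()
        u2 = u1 if common else rng.random()  # CRN reuses the same draw
        diffs.append(service_time(1.0, u1) - service_time(1.2, u2))
    return diffs

n = 5000
var_indep = pvariance(difference_samples(n, common=False))
var_crn = pvariance(difference_samples(n, common=True))
```

Under CRN the difference collapses to a scaled copy of a single exponential draw, so the variance reduction here is dramatic; in realistic simulations the gain depends on how strongly the two configurations respond to the shared randomness.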
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can, with the same knowledge of data, get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). 
A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny-tested bulls while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum-variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
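For the two-estimate case, the maximum likelihood (minimum-variance) combination under a bivariate normal model has a closed form. The numbers below are hypothetical, and the sketch assumes known population variances and covariance rather than the sample statistics discussed in the abstract.

```python
def ml_combine(x1, x2, v1, v2, cov):
    """Minimum-variance unbiased combination of two correlated
    estimates x1, x2 of the same quantity, with variances v1, v2
    and covariance cov (bivariate normal ML estimate)."""
    denom = v1 + v2 - 2 * cov
    w1 = (v2 - cov) / denom          # weight on the first estimate
    w2 = (v1 - cov) / denom          # weights sum to one (unbiasedness)
    est = w1 * x1 + w2 * x2
    var = (v1 * v2 - cov * cov) / denom
    return est, var

# Hypothetical correlated eigenvalue estimates (illustrative values)
est, var = ml_combine(1.002, 0.998, 4e-6, 6e-6, 2e-6)
```

Note that the combined variance (about 3.3e-6 here) falls below the smaller individual variance (4e-6), which is the "variance reduction relative to the minimum-variance individual eigenvalue" the abstract reports.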
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust, with good generalization performance on new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small-sample performance estimation, such as the recently proposed repeated random sampling (RRS) procedure, are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called repeated independent design and test (RIDT). Our simulations reveal that repeated designs and tests based on resampling within a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that, via modeling and subsequent reduction of the small-sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
Uechi, Ken; Asakura, Keiko; Masayasu, Shizuko; Sasaki, Satoshi
2017-06-01
Salt intake in Japan remains high; therefore, exploring within-country variation in salt intake and its cause is an important step in the establishment of salt reduction strategies. However, no nationwide evaluation of this variation has been conducted by urinalysis. We aimed to clarify whether within-country variation in salt intake exists in Japan after adjusting for individual characteristics. Healthy men (n=1027) and women (n=1046) aged 20-69 years were recruited from all 47 prefectures of Japan. Twenty-four-hour sodium excretion was estimated using three spot urine samples collected on three nonconsecutive days. The study area was categorized into 12 regions defined by the National Health and Nutrition Survey Japan. Within-country variation in sodium excretion was estimated as a population (region)-level variance using a multilevel model with random intercepts, with adjustment for individual biological, socioeconomic and dietary characteristics. Estimated 24 h sodium excretion was 204.8 mmol per day in men and 155.7 mmol per day in women. Sodium excretion was high in the Northeastern region. However, population-level variance was extremely small after adjusting for individual characteristics (0.8 and 2% of overall variance in men and women, respectively) compared with individual-level variance (99.2 and 98% of overall variance in men and women, respectively). Among individual characteristics, greater body mass index, living with a spouse and high miso-soup intake were associated with high sodium excretion in both sexes. Within-country variation in salt intake in Japan was extremely small compared with individual-level variation. Salt reduction strategies for Japan should be comprehensive and should not address the small within-country differences in intake.
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately by high-volume manufacturing tests to monitor OPO using densely measured OVL data.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining a rare event probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
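A toy version of subset simulation, here for the tail probability of a standard normal rather than a biochemical system, shows the product-of-conditional-probabilities idea. The level fraction p0, sample counts, and random-walk Metropolis proposal are illustrative choices, not the paper's settings.

```python
import math
import random

random.seed(2)

def subset_simulation(limit, n=2000, p0=0.1):
    """Toy subset simulation for P(X > limit) with X ~ N(0, 1).
    Each level keeps the top p0 fraction of samples as seeds and
    regrows the population with conditional random-walk Metropolis."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(20):                      # safety cap on levels
        samples.sort(reverse=True)
        keep = int(p0 * n)
        level = samples[keep - 1]            # current intermediate threshold
        if level >= limit:
            # Final level: estimate the last conditional probability directly
            count = sum(1 for x in samples if x > limit)
            return prob * count / n
        prob *= p0                           # each level contributes a factor p0
        seeds = samples[:keep]
        samples = []
        for seed in seeds:
            x = seed
            for _ in range(n // keep):
                cand = x + random.gauss(0.0, 1.0)
                # Metropolis accept with N(0,1) density ratio, constrained above level
                if cand > level and random.random() < math.exp((x * x - cand * cand) / 2.0):
                    x = cand
                samples.append(x)
    return prob

p_rare = subset_simulation(3.0)   # true value is about 1.35e-3
```

Each intermediate level needs only an ordinary (roughly 10%) event, so the product of a few cheap conditional estimates reaches a probability that brute-force sampling would need millions of runs to resolve.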
Litterini, Amy J; Fieler, Vickie K; Cavanaugh, James T; Lee, Jeannette Q
2013-12-01
To compare the effects of resistance and cardiovascular exercise on functional mobility in individuals with advanced cancer. Prospective, 2-group pretest-posttest pilot study with randomization to either resistance or cardiovascular exercise mode. Comprehensive community cancer center and a hospital-based fitness facility. Volunteer sample of individuals (N=66; 30 men; 36 women; mean age, 62y) with advanced cancer recruited through the cancer center, palliative care service, rehabilitation department, and a local hospice. Ten weeks of individualized resistance or cardiovascular exercise, prescribed and monitored by oncology-trained exercise personnel. Functional mobility was assessed using the Short Physical Performance Battery (SPPB); self-reported pain and fatigue were assessed secondarily using visual analog scales. Data were analyzed using a split plot 2×2 analysis of variance (α=.05). Fifty-two patients (78.8%) completed the study: 23 (67.7%) of 34 patients in the resistance arm and 29 (90.6%) of 32 patients in the cardiovascular arm. No participant withdrew because of study adverse events. Ten-week outcomes (n=52) included a significant increase in SPPB total score (P<.001), increase in gait speed (P=.001), and reduction in fatigue (P=.05). Although cardiovascular exercise participants had a modestly greater improvement in SPPB total score than resistance training participants (F1,49=4.21, P=.045), the difference was not confirmed in a subsequent intention-to-treat analysis (N=66). Individuals with advanced cancer appear to benefit from exercise for improving functional mobility. Neither resistance nor cardiovascular exercise appeared to have a strong differential effect on outcome. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf
2012-01-01
Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.
NASA Technical Reports Server (NTRS)
Nickol, Craig L.; Haller, William J.
2016-01-01
NASA's Environmentally Responsible Aviation (ERA) project has matured technologies to enable simultaneous reductions in fuel burn, noise, and nitrogen oxide (NOx) emissions for future subsonic commercial transport aircraft. The fuel burn reduction target was a 50% reduction in block fuel burn (relative to a 2005 best-in-class baseline aircraft), utilizing technologies with an estimated Technology Readiness Level (TRL) of 4-6 by 2020. Progress towards this fuel burn reduction target was measured through the conceptual design and analysis of advanced subsonic commercial transport concepts spanning vehicle size classes from regional jet (98 passengers) to very large twin aisle size (400 passengers). Both conventional tube-and-wing (T+W) concepts and unconventional (over-wing-nacelle (OWN), hybrid wing body (HWB), mid-fuselage nacelle (MFN)) concepts were developed. A set of propulsion and airframe technologies were defined and integrated onto these advanced concepts which were then sized to meet the baseline mission requirements. Block fuel burn performance was then estimated, resulting in reductions relative to the 2005 best-in-class baseline performance ranging from 39% to 49%. The advanced single-aisle and large twin aisle T+W concepts had reductions of 43% and 41%, respectively, relative to the 737-800 and 777-200LR aircraft. The single-aisle OWN concept and the large twin aisle class HWB concept had reductions of 45% and 47%, respectively. In addition to their estimated fuel burn reduction performance, these unconventional concepts have the potential to provide significant noise reductions due, in part, to engine shielding provided by the airframe. Finally, all of the advanced concepts also have the potential for significant NOx emissions reductions due to the use of advanced combustor technology. Noise and NOx emissions reduction estimates were also generated for these concepts as part of the ERA project.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
Summary: We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
Dynamic Repertoire of Intrinsic Brain States Is Reduced in Propofol-Induced Unconsciousness
Liu, Xiping; Pillay, Siveshigan
2015-01-01
Abstract: The richness of conscious experience is thought to scale with the size of the repertoire of causal brain states, and it may be diminished in anesthesia. We estimated the state repertoire from dynamic analysis of intrinsic functional brain networks in conscious sedated and unconscious anesthetized rats. Functional magnetic resonance images were obtained from 30-min whole-brain resting-state blood oxygen level-dependent (BOLD) signals at propofol infusion rates of 20 and 40 mg/kg/h, intravenously. Dynamic brain networks were defined at the voxel level by sliding window analysis of regional homogeneity (ReHo) or coincident threshold crossings (CTC) of the BOLD signal acquired in nine sagittal slices. The state repertoire was characterized by the temporal variance of the number of voxels with significant ReHo or positive CTC. From low to high propofol dose, the temporal variances of ReHo and CTC were reduced by 78%±20% and 76%±20%, respectively. Both baseline and propofol-induced reduction of CTC temporal variance increased from lateral to medial position. Group analysis showed a 20% reduction in the number of unique states at the higher propofol dose. Analysis of temporal variance in 12 anatomically defined regions of interest predicted that the largest changes occurred in visual cortex, parietal cortex, and caudate-putamen. The results suggest that the repertoire of large-scale brain states derived from the spatiotemporal dynamics of intrinsic networks is substantially reduced at an anesthetic dose associated with loss of consciousness. PMID:24702200
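The sliding-window temporal-variance idea can be sketched on toy traces: count above-threshold values in each window and take the variance of the counts over time. The signals, window length, and threshold below are invented, not BOLD data.

```python
from statistics import pvariance

def sliding_window_counts(signal, window, threshold):
    """Number of above-threshold values in each sliding window,
    a stand-in for the per-window voxel counts in the abstract."""
    counts = []
    for start in range(len(signal) - window + 1):
        segment = signal[start:start + window]
        counts.append(sum(1 for v in segment if v > threshold))
    return counts

# Hypothetical traces: a variable ("rich") one and a flattened one
rich = [0, 3, 1, 4, 4, 5, 0, 0, 1, 4, 5, 5]
flat = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
var_rich = pvariance(sliding_window_counts(rich, 4, 2))
var_flat = pvariance(sliding_window_counts(flat, 4, 2))
```

A dynamic signal yields window counts that fluctuate over time (positive temporal variance), while a flattened signal yields constant counts and zero temporal variance, mirroring the reduction reported under deep propofol anesthesia.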
The influence of local spring temperature variance on temperature sensitivity of spring phenology.
Wang, Tao; Ottlé, Catherine; Peng, Shushi; Janssens, Ivan A; Lin, Xin; Poulter, Benjamin; Yue, Chao; Ciais, Philippe
2014-05-01
The impact of climate warming on the advancement of plant spring phenology has been heavily investigated over the last decade, and there exists great variability among plants in their phenological sensitivity to temperature. However, few studies have explicitly linked phenological sensitivity to local climate variance. Here, we set out to test the hypothesis that the strength of phenological sensitivity declines with increased local spring temperature variance, by synthesizing results across ground observations. We assembled a ground-based long-term (20-50 years) spring phenology database (the PEP725 database) and the corresponding climate dataset. We find a prevalent decline in the strength of phenological sensitivity with increasing local spring temperature variance at the species level in ground observations. This suggests that plants might be less likely to track climatic warming at locations with larger local spring temperature variance. This might be related to the possibility that frost risk is higher where local spring temperature variance is larger, and that plants adapt to avoid this risk by relying more on other cues (e.g., high chill requirements, photoperiod) for spring phenology, thus suppressing phenological responses to spring warming. This study shows that local spring temperature variance is an understudied factor in the study of phenological sensitivity, and highlights the necessity of incorporating it to improve the predictability of plant responses to anthropogenic climate change in future studies. © 2013 John Wiley & Sons Ltd.
Boukid, Fatma; Prandi, Barbara; Sforza, Stefano; Sayar, Rhouma; Seo, Yong Weon; Mejri, Mondher; Yacoubi, Ines
2017-07-19
Baker's asthma is a serious airway disease triggered by wheat protein CM3 α-amylase/trypsin inhibitor. The purpose of the present study was to investigate the impact of genotype and crop year on allergen CM3 α-amylase/trypsin inhibitor associated with baker's asthma. A historical series of Tunisian durum wheat (100 accessions), derived from three crop years, was used to compare the amount of CM3 from landraces to advanced cultivars. CM3 protein quantification was assessed after an enzymatic cleavage of the soluble protein extracts on a UPLC/ESI-MS system, using a marker peptide for its quantification. Combined data analysis of variance revealed an important effect of genotype, crop year, and their interaction. The CM3 allergenic proteins were found to significantly vary among studied genotypes, as confirmed by genetic variability, coefficient of variance, heritability, and genetic advance.
An Evolutionary Perspective on Epistasis and the Missing Heritability
Hemani, Gibran; Knott, Sara; Haley, Chris
2013-01-01
The relative importance between additive and non-additive genetic variance has been widely argued in quantitative genetics. By approaching this question from an evolutionary perspective we show that, while additive variance can be maintained under selection at a low level for some patterns of epistasis, the majority of the genetic variance that will persist is actually non-additive. We propose that one reason that the problem of the “missing heritability” arises is because the additive genetic variation that is estimated to be contributing to the variance of a trait will most likely be an artefact of the non-additive variance that can be maintained over evolutionary time. In addition, it can be shown that even a small reduction in linkage disequilibrium between causal variants and observed SNPs rapidly erodes estimates of epistatic variance, leading to an inflation in the perceived importance of additive effects. We demonstrate that the perception of independent additive effects comprising the majority of the genetic architecture of complex traits is biased upwards and that the search for causal variants in complex traits under selection is potentially underpowered by parameterising for additive effects alone. Given dense SNP panels the detection of causal variants through genome-wide association studies may be improved by searching for epistatic effects explicitly. PMID:23509438
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
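The exponential transform can be sketched on an even simpler problem than the one analyzed here: estimating the transmission probability e^(-τ) through a purely absorbing slab. The stretching parameter sigma_star below is a hypothetical choice, and the toy problem omits the anisotropic scattering and implicit capture treated in the paper; this is a minimal sketch of the likelihood-ratio weighting idea, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 5.0             # slab optical thickness (total cross-section = 1)
n = 100_000
exact = np.exp(-tau)  # analytic transmission probability

def transmission(sigma_star):
    """Estimate exp(-tau) with free paths drawn from Exp(sigma_star).

    sigma_star = 1.0 is the analog game; sigma_star < 1 stretches paths
    toward the tail, and the likelihood-ratio weight keeps the estimate
    unbiased."""
    s = rng.exponential(1.0 / sigma_star, n)           # biased free paths
    w = np.exp(-(1.0 - sigma_star) * s) / sigma_star   # p(s)/q(s) weight
    score = np.where(s > tau, w, 0.0)                  # transmitted -> weight
    return score.mean(), score.std(ddof=1) / np.sqrt(n)

analog_mean, analog_err = transmission(1.0)
biased_mean, biased_err = transmission(0.2)  # stretched toward transmission
```

With these numbers the stretched game cuts the standard error several-fold for the same number of histories, which is exactly the regime where the reliability of the error estimate itself becomes the interesting question.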
Symmetry-Based Variance Reduction Applied to 60Co Teletherapy Unit Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, D.
A new variance reduction technique (VRT) is implemented in the BEAM code [1] to specifically improve the efficiency of calculating penumbral distributions of in-air fluence profiles for isotopic sources. The simulations focus on 60Co teletherapy units. The VRT splits photons exiting the source capsule of a 60Co teletherapy source according to a splitting recipe and distributes the split photons randomly on the periphery of a circle, preserving the direction cosine along the beam axis as well as the energy of the photon. It is shown that the VRT developed in this work can lead to a 6-9 fold improvement in the efficiency of calculating the penumbral photon fluence of a 60Co beam compared to the standard optimized BEAM code [1] (i.e., one with the proper selection of electron transport parameters).
Cerebellar malformations alter regional cerebral development.
Bolduc, Marie-Eve; Du Plessis, Adre J; Evans, Alan; Guizard, Nicolas; Zhang, Xun; Robertson, Richard L; Limperopoulos, Catherine
2011-12-01
The aim of this study was to compare total and regional cerebral volumes in children with isolated cerebellar malformations (CBMs) with those in typically developing children, and to examine the extent to which cerebellar volumetric reductions are associated with total and regional cerebral volumes. This is a case-control study of children diagnosed with isolated CBMs. Each child was matched on age and sex to two typically developing children. Using advanced three-dimensional volumetric magnetic resonance imaging, the cerebrum was segmented into tissue classes and partitioned into eight regions. Analysis of variance was used to compare cerebral volumes between children with CBMs and control children, and linear regressions to examine the impact of cerebellar volume reduction on cerebral volumes. Magnetic resonance imaging was performed at a mean age of 27 months in 20 children (10 males, 10 females) with CBMs and 40 typically developing children. Children with CBMs showed significantly smaller deep grey matter nuclei (p < 0.001), subgenual white matter (p = 0.03), midtemporal white matter (p = 0.02), and inferior occipital grey matter (p = 0.03) volumes than typically developing children. Greater cerebellar volumetric reduction in children with CBMs was associated with decreased total cerebral volume and deep grey matter nuclei (p = 0.02), subgenual white/grey matter (p = 0.001), midtemporal white (p = 0.02) and grey matter (p = 0.01), and parieto-occipital grey matter (p = 0.004). CBMs are associated with impaired regional cerebral growth, suggesting deactivation of principal cerebello-cerebral pathways. © The Authors. Developmental Medicine & Child Neurology © 2011 Mac Keith Press.
Variability, trends, and predictability of seasonal sea ice retreat and advance in the Chukchi Sea
NASA Astrophysics Data System (ADS)
Serreze, Mark C.; Crawford, Alex D.; Stroeve, Julienne C.; Barrett, Andrew P.; Woodgate, Rebecca A.
2016-10-01
As assessed over the period 1979-2014, the date that sea ice retreats to the shelf break (150 m contour) of the Chukchi Sea has a linear trend of -0.7 days per year. The date of seasonal ice advance back to the shelf break has a steeper trend of about +1.5 days per year, together yielding an increase in the open water period of 80 days. Based on detrended time series, we ask how interannual variability in advance and retreat dates relates to various forcing parameters including radiation fluxes, temperature and wind (from numerical reanalyses), and the oceanic heat inflow through the Bering Strait (from in situ moorings). Of all variables considered, the retreat date is most strongly correlated (r ≈ 0.8) with the April through June Bering Strait heat inflow. After testing a suite of statistical linear models using several potential predictors, the best model for predicting the date of retreat includes only the April through June Bering Strait heat inflow, which explains 68% of retreat date variance. The best model predicting the ice advance date includes the July through September inflow and the date of retreat, explaining 67% of advance date variance. We address these relationships by discussing heat balances within the Chukchi Sea and the hypothesis of oceanic heat transport triggering ocean heat uptake and ice-albedo feedback. Developing an operational prediction scheme for seasonal retreat and advance would require timely acquisition of Bering Strait heat inflow data. Predictability will likely always be limited by the chaotic nature of atmospheric circulation patterns.
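A single-predictor linear model of the kind selected above can be sketched as follows. The numbers are synthetic stand-ins (the actual Bering Strait heat inflow series is not reproduced here), so only the mechanics, not the values, mirror the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 36                                  # a 1979-2014-style record length
heat = rng.normal(3.0, 1.0, n_years)          # synthetic Apr-Jun heat inflow
# Synthetic retreat date: earlier retreat with larger heat inflow, plus noise
retreat = 200.0 - 8.0 * heat + rng.normal(0.0, 5.0, n_years)

X = np.column_stack([np.ones(n_years), heat])  # intercept + single predictor
beta, *_ = np.linalg.lstsq(X, retreat, rcond=None)
resid = retreat - X @ beta
r2 = 1.0 - resid.var() / retreat.var()         # share of variance explained
```

A negative slope (earlier retreat with more ocean heat) falls out directly; model selection across candidate predictor sets would repeat this fit per set and compare explained variance.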
Lowthian, P; Disler, P; Ma, S; Eagar, K; Green, J; de Graaff, S
2000-10-01
To investigate whether the Australian National Sub-acute and Non-acute Patient Casemix Classification (AN-SNAP) and Functional Independence Measure and Functional Related Group (Version 2) (FIM-FRG2) casemix systems can be used to predict functional outcome and reduce the variance of length of stay (LOS) of patients undergoing rehabilitation after stroke. The study comprised a retrospective analysis of the records of patients admitted to the Cedar Court Healthsouth Rehabilitation Hospital for rehabilitation after stroke. The sample included 547 patients (83.3% of those admitted with stroke during this period). Patient data were stratified for analysis into the five AN-SNAP or nine FIM-FRG2 groups on the basis of admission FIM scores and age. The AN-SNAP classification accounted for a 30.7% reduction in the variance of LOS and a 44.2% reduction in the variance of motor FIM, while the FIM-FRG2 accounted for reductions of 33.5% and 56.4%, respectively. Comparison of the Cedar Court data with the national AN-SNAP data showed differences in the LOS and functional outcomes of older, severely disabled patients. Intensive rehabilitation in selected patients of this type appears to have positive effects, albeit with a slightly longer period of inpatient rehabilitation. Casemix classifications can be powerful management tools. Although FIM-FRG2 accounts for more reduction in variance than AN-SNAP, division into nine groups meant that some groups contained few subjects. This paper supports the introduction of AN-SNAP as the standard casemix tool for rehabilitation in Australia, which will hopefully lead to rational, adequate funding of the rehabilitation phase of care.
Advanced Subsonic Airplane Design and Economic Studies
NASA Technical Reports Server (NTRS)
Liebeck, Robert H.; Andrastek, Donald A.; Chau, Johnny; Girvin, Raquel; Lyon, Roger; Rawdon, Blaine K.; Scott, Paul W.; Wright, Robert A.
1995-01-01
A study was made to examine the effect of advanced technology engines on the performance of subsonic airplanes and to provide a vision of the potential that these advanced engines offered. The year 2005 was selected as the entry-into-service (EIS) date for the engine/airframe combinations. A set of four airplane classes (passenger and design range combinations) envisioned to span the needs of the 2005 EIS period was defined. The airframes for all classes were designed and sized using 2005 EIS advanced technology. Two airplanes were designed and sized for each class: one using current technology (1995) engines to provide a baseline, and one using advanced technology (2005) engines. The resulting engine/airframe combinations were compared and evaluated on the basis of sensitivity to basic engine performance parameters (e.g., SFC and engine weight) as well as DOC+I. The advanced technology engines provided significant reductions in fuel burn, weight, and wing area. Average values were as follows: reduction in fuel burn = 18%, reduction in wing area = 7%, and reduction in TOGW = 9%. The average DOC+I reduction was 3.5% using the pricing model based on payload-range index and 5% using the pricing model based on airframe weight. Noise and emissions were not considered.
Relationship between extrinsic factors and the acromio-humeral distance.
Mackenzie, Tanya Anne; Herrington, Lee; Funk, Lenard; Horsley, Ian; Cools, Ann
2016-06-01
Maintenance of the subacromial space is important in impingement syndromes. Research exploring the correlation between biomechanical factors and the subacromial space would be beneficial. To establish whether a relationship exists between the independent variables of scapular rotation, shoulder internal rotation, shoulder external rotation, total arc of shoulder rotation, pectoralis minor length, thoracic curve, and shoulder activity level and the dependent variables: acromio-humeral distance (AHD) in neutral, AHD in 60° arm abduction, and percentage reduction in AHD. Controlled laboratory study. Data from 72 male control shoulders (mean age 24.28 years, SD 6.81) and 186 elite sportsmen's shoulders (mean age 25.19 years, SD 5.17) were included in the analysis. The independent variables were quantified, and real-time ultrasound was used to measure the dependent variable, acromio-humeral distance. Shoulder internal rotation and pectoralis minor length explained 8% and 6%, respectively, of the variance in acromio-humeral distance in neutral. Pectoralis minor length accounted for 4% of the variance in 60° arm abduction. Total arc of rotation, shoulder external rotation range, and shoulder activity levels explained 9%, 15%, and 16%-29% of the variance, respectively, in percentage reduction in acromio-humeral distance during arm abduction to 60°. Pectoralis minor length, shoulder rotation ranges, total arc of shoulder rotation, and shoulder activity levels were found to have weak to moderate relationships with acromio-humeral distance. The existence and strength of the relationships were population specific and dependent on arm position. The relationships accounted for only small portions of the variance in AHD, indicating that other factors are also involved in determining AHD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 well above 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model combining 13 studies on tuberculosis vaccine. PMID:23750070
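For the random intercept model, the flavor of Rr2 can be illustrated with simulated clustered data. As a simplifying assumption (an illustration, not the paper's exact derivation), the sketch equates the proportion of variance attributable to random intercepts with the intraclass correlation sigma_b^2 / (sigma_b^2 + sigma_e^2), estimated by the one-way ANOVA method of moments.

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 50, 10                     # 50 clusters, 10 observations each
sigma_b, sigma_e = 2.0, 1.0       # true random-intercept and residual SDs
b = rng.normal(0.0, sigma_b, k)   # cluster-level random intercepts
y = b[:, None] + rng.normal(0.0, sigma_e, (k, m))

# One-way ANOVA (method-of-moments) variance components
grand = y.mean()
ms_between = m * np.sum((y.mean(axis=1) - grand) ** 2) / (k - 1)
ms_within = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))
var_b = (ms_between - ms_within) / m   # estimated random-effect variance
rr2 = var_b / (var_b + ms_within)      # share explained by random effects
```

With sigma_b twice sigma_e the true share is 0.8, so rr2 lands well above 0, the regime where the abstract argues the mixed model is supported over plain regression.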
The key kinematic determinants of undulatory underwater swimming at maximal velocity.
Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross
2016-01-01
The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted R2 = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted R2 = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
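The arithmetic behind a single-site standard deviation can be sketched with illustrative numbers (assumptions, not the Taiwan estimates): removing the repeatable site-to-site term from the total sigma in quadrature gives the reduced aleatory variability usable in a site-specific hazard analysis.

```python
import math

sigma_total = 0.70  # illustrative total standard deviation of log residuals
phi_s2s = 0.30      # illustrative site-to-site (repeatable site term) SD

# Single-site sigma: remove the repeatable site term in quadrature
sigma_ss = math.sqrt(sigma_total**2 - phi_s2s**2)
reduction_pct = 100.0 * (1.0 - sigma_ss / sigma_total)
```

These toy values give a reduction near 10%, the same order as the 9%-14% single-site reductions reported above; the much larger single-path reductions follow from additionally removing the repeatable path term in the same quadrature fashion.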
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers.
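The DAS and DMAS combination rules can be sketched for channels that have already been delayed to a common focal point. This follows the standard DMAS formulation from the beamforming literature (signed square roots of pairwise products); the MV weighting that defines MVB-DMAS is not shown.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum over the already-delayed channels."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay-multiply-and-sum: signed square roots of all pairwise channel
    products, which suppresses incoherent (noise-dominated) pairs."""
    n_ch = delayed.shape[0]
    out = np.zeros(delayed.shape[1])
    for i in range(n_ch - 1):
        for j in range(i + 1, n_ch):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

coherent = np.ones((8, 4))  # 8 channels of perfectly coherent toy samples
```

On perfectly coherent unit samples DAS returns the channel count (8) and DMAS the number of channel pairs (28); on real data, the pairwise products are what lower the sidelobes relative to DAS.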
Daugherty, Ana M.; Bender, Andrew R.; Yuan, Peng; Raz, Naftali
2016-01-01
Impairment of hippocampus-dependent cognitive processes has been proposed to underlie age-related deficits in navigation. Animal studies suggest a differential role of hippocampal subfields in various aspects of navigation, but that hypothesis has not been tested in humans. In this study, we examined the association between the volume of hippocampal subfields and age differences in virtual spatial navigation. In a sample of 65 healthy adults (age 19–75 years), advanced age was associated with a slower rate of improvement, operationalized as shortening of the search path over 25 learning trials on a virtual Morris water maze task. The deficits were partially explained by the greater complexity of older adults' search paths. Larger subiculum and entorhinal cortex volumes were associated with a faster decrease in search path complexity, which in turn explained faster shortening of search distance. Larger Cornu Ammonis (CA) 1–2 volume was associated with faster distance shortening, but not with faster reduction in path complexity. Age differences in regional volumes collectively accounted for 23% of the age-related variance in navigation learning. Independent of subfield volumes, advanced age was associated with poorer performance across all trials, even after reaching the asymptote. Thus, subiculum and CA1–2 volumes were associated with speed of acquisition, but not magnitude of gains, in virtual maze navigation. PMID:25838036
Derived variants at six genes explain nearly half of size reduction in dog breeds.
Rimbault, Maud; Beale, Holly C; Schoenebeck, Jeffrey J; Hoopes, Barbara C; Allen, Jeremy J; Kilroy-Glynn, Paul; Wayne, Robert K; Sutter, Nathan B; Ostrander, Elaine A
2013-12-01
Selective breeding of dogs by humans has generated extraordinary diversity in body size. A number of multibreed analyses have been undertaken to identify the genetic basis of this diversity. We analyzed four loci discovered in a previous genome-wide association study that used 60,968 SNPs to identify size-associated genomic intervals, which were too large to assign causative roles to genes. First, we performed fine-mapping to define critical intervals that included the candidate genes GHR, HMGA2, SMAD2, and STC2, identifying five highly associated markers at the four loci. We hypothesize that three of the variants are likely to be causative. We then genotyped each marker, together with previously reported size-associated variants in the IGF1 and IGF1R genes, on a panel of 500 domestic dogs from 93 breeds, and identified the ancestral allele by genotyping the same markers on 30 wild canids. We observed that the derived alleles at all markers correlated with reduced body size, and smaller dogs are more likely to carry derived alleles at multiple markers. However, breeds are not generally fixed at all markers; multiple combinations of genotypes are found within most breeds. Finally, we show that 46%-52.5% of the variance in body size of dog breeds can be explained by seven markers in proximity to exceptional candidate genes. Among breeds with standard weights <41 kg (90 lb), the genotypes accounted for 64.3% of variance in weight. This work advances our understanding of mammalian growth by describing genetic contributions to canine size determination in non-giant dog breeds.
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits
NASA Technical Reports Server (NTRS)
Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.
2005-01-01
This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft which included a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.
Infant Visual Expectations: Advances and Issues.
ERIC Educational Resources Information Center
Haith, Marshall M.; Wass, Tara S.; Adler, Scott A.
1997-01-01
Speculates on underlying processes for the reaction time variance and age differences in anticipation latency using the Visual Expectation Paradigm. Discusses the dichotomization of reactive and anticipatory behavior, limitations of longitudinal designs, drawbacks in using standard procedures and materials, and inferences that can be made…
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation of electron beam radiotherapy consumes a long computation time. A variance reduction technique (VRT) was implemented in MC to shorten this duration. This work focused on optimisation of the VRT parameters, namely electron range rejection and the number of particle histories. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT. The validated MC model simulation was repeated by applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2, and 5 MeV, using 20 × 10^7 particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was used in the particle-history analysis, which ranged from 7.5 × 10^7 to 20 × 10^7 histories. In this study, with a 5 MeV electron cut-off and 10 × 10^7 particle histories, the simulation was four times faster than the non-VRT calculation, with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.
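Electron range rejection can be illustrated with a toy condensed-history loop. The geometry, step size, energy-loss model, and range function below are invented for illustration and are not EGSnrc physics; the point is only the termination rule: once the electron's energy falls below the cut-off and its residual range cannot reach the region boundary, tracking stops and the energy is deposited locally.

```python
import numpy as np

rng = np.random.default_rng(3)

def track_electron(e0, boundary, e_cut, range_of):
    """Toy condensed-history loop with range rejection.

    range_of(E) maps energy (MeV) to residual range (cm); it and the fixed
    0.05 cm step and per-step energy loss are illustrative assumptions."""
    e, z, steps = e0, 0.0, 0
    while e > 0.01:
        # Range rejection: below cut-off and cannot escape -> terminate
        if e < e_cut and z + range_of(e) < boundary:
            return steps
        z += 0.05                    # advance along the beam axis (cm)
        e -= rng.uniform(0.3, 0.5)   # stochastic energy loss per step (MeV)
        steps += 1
    return steps

toy_range = lambda e: 0.5 * e        # invented energy-to-range mapping
no_rr = np.mean([track_electron(12.0, 100.0, 0.0, toy_range)
                 for _ in range(200)])
with_rr = np.mean([track_electron(12.0, 100.0, 5.0, toy_range)
                   for _ in range(200)])
```

The rejected histories stop roughly when the energy drops below 5 MeV, so the mean step count falls substantially, mirroring the computation-time savings reported above.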
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance under each of the three imbalance scenarios, for a total of 15,000 simulated datasets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were captured 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Vijay; Denton, David; Sharma, Pradeep
The key objective of this project was to evaluate the potential to achieve substantial reductions in the production cost of H2-rich syngas via coal gasification with near-zero emissions, due to the cumulative and synergistic benefits realized when multiple advanced technologies are integrated into the overall conversion process. In this project, Aerojet Rocketdyne’s (AR’s) advanced gasification technology (currently being offered as R-GAS™) and RTI International’s (RTI’s) advanced warm syngas cleanup technologies were evaluated via a number of comparative techno-economic case studies. AR’s advanced gasification technology consists of a dry solids pump and a compact gasifier system. Based on the unique design of this gasifier, it has been shown to reduce the capital cost of the gasification block by between 40 and 50%. At the start of this project, actual experimental work had been demonstrated through pilot plant systems for both the gasifier and the dry solids pump. RTI’s advanced warm syngas cleanup technologies consist primarily of RTI’s Warm Gas Desulfurization Process (WDP) technology, which effectively decouples sulfur and CO2 removal, allowing for more flexibility in the selection of the CO2 removal technology, plus associated advanced technologies for direct sulfur recovery and water gas shift (WGS). WDP has been demonstrated at pre-commercial scale using an activated amine carbon dioxide recovery process, which would not have been possible if a majority of the sulfur had not been removed from the syngas by WDP. This pre-commercial demonstration of RTI’s advanced warm syngas cleanup system was conducted in parallel with the activities on this project. The technical data and cost information from this pre-commercial demonstration were used extensively in this project during the techno-economic analysis. With this project, both of RTI’s advanced WGS technologies were investigated.
Because RTI’s advanced fixed-bed WGS (AFWGS) process was successfully implemented in the WDP pre-commercial demonstration test mentioned above, this technology was used as part of RTI’s advanced warm syngas technology package for the techno-economic analyses for this project. RTI’s advanced transport-reactor-based WGS (ATWGS) process was still conceptual at the start of this project, but one of the tasks for this project was to evaluate the technical feasibility of this technology. In each of the three application-based comparison studies conducted as part of this project, the reference case was based on an existing Department of Energy National Energy Technology Laboratory (DOE/NETL) system study. Each of these reference cases used existing commercial technology, and each system resulted in > 90% carbon capture. In the comparison studies for the use of the hydrogen-rich syngas generated in either an Integrated Gasification Combined Cycle (IGCC) or a Coal-to-Methanol (CTM) plant, the comparison cases consisted of the reference case, a case with the integration of each individual advanced technology (either AR or RTI), and finally a case with the integration of all the advanced technologies (AR and RTI combined). The Coal-to-Liquids (CTL) comparison study consisted of only three cases: a reference case, a case with just RTI’s advanced syngas cleanup technology, and a case with AR’s and RTI’s advanced technologies. The results from these comparison studies showed that the integration of the advanced technologies did result in substantial benefits, and by far the greatest benefits were achieved for the cases integrating all the advanced technologies. For the IGCC study, the fully integrated case resulted in a 1.4% net efficiency improvement, an 18% reduction in capital cost per kW of capacity, a 12% reduction in the operating cost per kWh, and a 75–79% reduction in sulfur emissions.
For the CTM case, the fully integrated plant resulted in a 22% reduction in capital cost, a 13% reduction in operating costs, a > 99% net reduction in sulfur emissions, and a 13–15% reduction in CO2 emissions. Because the capital cost represents over 60% of the methanol Required Selling Price (RSP), the significant reduction in the capital cost for the advanced technology case resulted in an 18% reduction in methanol RSP. For the CTL case, the fully integrated plant resulted in a 16% reduction in capital cost, which represented a 13% reduction in diesel RSP. Finally, the technical feasibility analysis of RTI’s ATWGS process demonstrated that a fluid-bed catalyst with sufficient attrition resistance and WGS activity could be made and that the process achieved about a 24% reduction in capital cost compared to a conventional fixed-bed commercial process.
Response Monitoring and Adjustment: Differential Relations with Psychopathic Traits
Bresin, Konrad; Finy, M. Sima; Sprague, Jenessa; Verona, Edelyn
2014-01-01
Studies on the relation between psychopathy and cognitive functioning often show mixed results, partially because different factors of psychopathy have not been considered fully. Based on previous research, we predicted divergent results based on a two-factor model of psychopathy (interpersonal-affective traits and impulsive-antisocial traits). Specifically, we predicted that the unique variance of interpersonal-affective traits would be related to increased monitoring (i.e., error-related negativity) and adjusting to errors (i.e., post-error slowing), whereas impulsive-antisocial traits would be related to reductions in these processes. Three studies using a diverse selection of assessment tools, samples, and methods are presented to identify response monitoring correlates of the two main factors of psychopathy. In Studies 1 (undergraduates), 2 (adolescents), and 3 (offenders), interpersonal-affective traits were related to increased adjustment following errors and, in Study 3, to enhanced monitoring of errors. Impulsive-antisocial traits were not consistently related to error adjustment across the studies, although these traits were related to a deficient monitoring of errors in Study 3. The results may help explain previous mixed findings and advance implications for etiological models of psychopathy. PMID:24933282
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling that included the fictional perfect procedure resulted in a more cost-effective combination when it was paired with the lower-cost, higher-variance of the two existing procedures.
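The regression form of the double-sampling (two-phase) estimator alluded to above can be sketched as follows. A cheap proxy (a LANDSAT-style measurement) is taken on a large sample, the expensive procedure on a subsample, and the proxy's large-sample mean corrects the small-sample estimate. The sample sizes, correlation, and mean values below are illustrative assumptions, not the study's figures:

```python
import numpy as np

rng = np.random.default_rng(0)

def double_sampling_estimate(rng, n_large=400, n_small=40, rho=0.9):
    """One replicate: cheap measurements x on a large sample, expensive y
    on a small subsample; regression (double-sampling) estimator of mean(y)."""
    # Correlated (x, y): x is the cheap proxy, y the costly ground measurement.
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([10.0, 10.0], cov, size=n_large)
    x, y = xy[:, 0], xy[:, 1]
    sub = rng.choice(n_large, size=n_small, replace=False)
    xs, ys = x[sub], y[sub]
    b = np.cov(xs, ys)[0, 1] / np.var(xs, ddof=1)   # regression slope on the subsample
    y_reg = ys.mean() + b * (x.mean() - xs.mean())  # double-sampling estimate
    y_direct = ys.mean()                            # expensive-only estimate
    return y_reg, y_direct

reps = np.array([double_sampling_estimate(rng) for _ in range(2000)])
var_reg, var_direct = reps[:, 0].var(), reps[:, 1].var()
reduction = 1 - var_reg / var_direct
print(f"variance reduction from double sampling: {reduction:.0%}")
```

The gain grows with the cost ratio and the correlation between the two procedures, which is why the abstract's cost/variance trade-off drives the design.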
Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
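A minimal sketch of a local-variance anomaly flag of the kind described, assuming a simple per-pixel z-score against a square neighborhood; the window size, threshold, and NDVI values are illustrative choices, not the authors' parameters:

```python
import numpy as np

def local_anomalies(img, half=2, z=2.0):
    """Flag each pixel as +1/-1/0 by comparing it to the mean and standard
    deviation of its (2*half+1)^2 neighborhood (a local z-score test)."""
    rows, cols = img.shape
    flags = np.zeros_like(img, dtype=int)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half), min(rows, r + half + 1)
            c0, c1 = max(0, c - half), min(cols, c + half + 1)
            nb = img[r0:r1, c0:c1]
            mu, sd = nb.mean(), nb.std()
            if sd > 0 and abs(img[r, c] - mu) > z * sd:
                flags[r, c] = 1 if img[r, c] > mu else -1
    return flags

# A flat NDVI field with one degraded (low) pixel:
ndvi = np.full((9, 9), 0.6)
ndvi += np.random.default_rng(5).normal(0, 0.01, ndvi.shape)
ndvi[4, 4] = 0.2
flags = local_anomalies(ndvi)
print("anomalous pixels:", np.argwhere(flags != 0))
```

In the study this flag would be computed per year and the flags accumulated across years to build the anomaly maps.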
Sakamoto, Sadanori; Iguchi, Masaki
2018-06-08
Less attention to a balance task reduces the center of foot pressure (COP) variability by automating the task. However, it is not fully understood how the degree of postural automaticity influences voluntary movement and anticipatory postural adjustments. Eleven healthy young adults performed a bipedal, eyes-closed standing task under three conditions: Control (C, standing task), Single (S, standing + reaction tasks), and Dual (D, standing + reaction + mental tasks). The reaction task was flexing the right shoulder in response to an auditory stimulus, which causes counter-clockwise rotational torque, and the mental task was an arithmetic task. The COP variance before the reaction task was reduced in the D condition compared to that in the C and S conditions. On average, the onsets of the arm movement and the vertical torque (Tz, anticipatory clockwise rotational torque) were both delayed, and the maximal Tz slope (the rate at which the torque develops) became less steep in the D condition compared to the S condition. When these data in the D condition were expressed as a percentage of those in the S condition, the arm movement onset correlated positively, and the Tz slope negatively, with the COP variance. Using the mental-task-induced COP variance reduction as an indicator of postural automaticity, our data suggest that the balance task is less cognitively demanding for those with greater COP variance reduction, leading to shorter reaction times, probably due to an attention shift from the automated balance task to the reaction task. Copyright © 2018 Elsevier B.V. All rights reserved.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If the coefficient is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy-variable approach. We derive explicit formulas for the coefficient in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed-model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for a combination of 13 studies on tuberculosis vaccine.
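For intuition, the proportion of variance attributable to random effects can be estimated in the balanced random-intercept case from method-of-moments ANOVA components. This intraclass-correlation-style quantity is a simplified stand-in, not Demidenko's exact coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_effects_proportion(y):
    """Method-of-moments variance components for a balanced one-way
    random-intercept model y[group, obs]; returns the estimated share of
    total variance due to the random intercepts (an ICC-style quantity)."""
    m, n = y.shape
    grand = y.mean()
    msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (m - 1)  # between-group mean square
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (n - 1))
    var_b = max((msb - msw) / n, 0.0)                          # random-effect variance
    return var_b / (var_b + msw)

# Simulate m=200 clusters of n=10 with intercept variance 4 and noise variance 1:
m, n, sd_b, sd_e = 200, 10, 2.0, 1.0
y = sd_b * rng.standard_normal((m, 1)) + sd_e * rng.standard_normal((m, n))
print(f"estimated random-effect share: {random_effects_proportion(y):.2f}")  # true value 4/5 = 0.80
```

A share near 0 corresponds to the abstract's "drop the random effects" case; a share near 1 corresponds to treating the intercepts as free fixed effects.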
Hersoug, Anne Grete
2004-12-01
My first focus in this study was to explore therapists' personal characteristics as predictors of the proportion of interpretation in brief dynamic psychotherapy (N=39; maximum 40 sessions). I used data from the Norwegian Multicenter Study on Process and Outcome of Psychotherapy (1995). The main finding was that therapists who had experienced good parental care gave less interpretation (28% of variance accounted for). Therapists who had more negative introjects used a higher proportion of interpretation (16% of variance accounted for). Patients' pretreatment characteristics were not predictive of therapists' use of interpretation. The second focus was to investigate the impact of therapists' personality and the proportion of interpretation on the development of patients' maladaptive defensive functioning over the course of therapy. Better parental care and less negative introjects in therapists were associated with a positive influence and accounted for 5% of the variance in the reduction of patients' maladaptive defense.
Two proposed convergence criteria for Monte Carlo solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Pederson, S.P.; Booth, T.E.
1992-01-01
The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
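The first proposed check can be sketched directly from the central moments of the history scores. The form below is the relative VOV estimator commonly used in Monte Carlo codes such as MCNP, applied here to illustrative exponential scores rather than a real tally:

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_vov(x):
    """Relative variance of the variance of the sample-mean estimate, in the
    fourth-moment form used by Monte Carlo codes such as MCNP:
    VOV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/N."""
    n = x.size
    d = x - x.mean()
    return (d ** 4).sum() / (d ** 2).sum() ** 2 - 1.0 / n

# For a well-behaved (finite-variance) tally, the VOV should fall roughly as 1/N:
scores = rng.exponential(scale=1.0, size=200_000)  # stand-in for history scores
for n in (1_000, 10_000, 100_000):
    print(f"N={n:>7d}  VOV={relative_vov(scores[:n]):.2e}")
```

A VOV that decreases like 1/N (and stays below the usual guideline of about 0.1) supports the claim that N is large enough for the CLT-based confidence interval.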
Introductory Guide to the Statistics of Molecular Genetics
ERIC Educational Resources Information Center
Eley, Thalia C.; Rijsdijk, Fruhling
2005-01-01
Background: This introductory guide presents the main two analytical approaches used by molecular geneticists: linkage and association. Methods: Traditional linkage and association methods are described, along with more recent advances in methodologies such as those using a variance components approach. Results: New methods are being developed all…
An Alternative to Ability Grouping
ERIC Educational Resources Information Center
Tomlinson, Carol Ann
2006-01-01
Ability grouping is a common approach to dealing with student variance in learning. In general, findings suggest that such an approach to dealing with student differences is disadvantageous to students who struggle in school and advantageous to advanced learners. The concept of differentiation suggests that there is another alternative to…
Technologies and Concepts for Reducing the Fuel Burn of Subsonic Transport Aircraft
NASA Technical Reports Server (NTRS)
Nickol, Craig L.
2012-01-01
There are many technologies under development that have the potential to enable large fuel burn reductions in the 2025 timeframe for subsonic transport aircraft relative to the current fleet. This paper identifies a potential technology suite and analyzes the fuel burn reduction potential of these technologies when integrated into advanced subsonic transport concepts. Advanced tube-and-wing concepts are developed in the single aisle and large twin aisle class, and a hybrid-wing-body concept is developed for the large twin aisle class. The resulting fuel burn reductions for the advanced tube-and-wing concepts range from a 42% reduction relative to the 777-200 to a 44% reduction relative to the 737-800. In addition, the hybrid-wing-body design resulted in a 47% fuel burn reduction relative to the 777-200. Of course, to achieve these fuel burn reduction levels, a significant amount of technology and concept maturation is required between now and 2025. A methodology for capturing and tracking concept maturity is also developed and presented in this paper.
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-02-01
In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
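The pairwise structure that distinguishes DMAS from DAS can be sketched on already-delayed channel samples. The signed-square-root combination below is the form commonly used in the photoacoustic DMAS literature, and the 16-channel data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

def das(s):
    """Delay-and-sum: plain sum of the already-delayed channel samples."""
    return s.sum()

def dmas(s):
    """Delay-multiply-and-sum: signed square root of pairwise products,
    summed over all channel pairs i < j."""
    out = 0.0
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n):
            p = s[i] * s[j]
            out += np.sign(p) * np.sqrt(abs(p))
    return out

# Coherent samples (focal point) vs incoherent samples (off-focus clutter):
coherent = np.ones(16)
incoherent = rng.standard_normal(16)
print("DAS  coherent / incoherent:", das(coherent), das(incoherent))
print("DMAS coherent / incoherent:", dmas(coherent), dmas(incoherent))
```

Because coherent channels reinforce across all pairs while incoherent products largely cancel, DMAS widens the gap between focus and clutter, which is the sidelobe suppression the abstract quantifies; MVB-DMAS then replaces the inner DAS-like sums with MV-weighted ones.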
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes: specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise (lattice discrete-time Markov chains). In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
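The antithetic-pair idea can be sketched for the simplest case mentioned above, affine (here pure-birth) rates under tau-leaping: driving each Poisson draw through its inverse CDF with u and 1 − u yields negatively correlated trajectories. The rates, step counts, and birth model below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_inv(u, lam):
    """Inverse CDF of Poisson(lam): smallest k with CDF(k) >= u.
    Monotonicity in u is what makes the antithetic coupling work."""
    k, p, cdf = 0, np.exp(-lam), np.exp(-lam)
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def tau_leap_path(us, x0=10.0, rate=lambda x: 0.3 * x, tau=0.1):
    """Tau-leaping for a pure-birth chain X += Poisson(rate(X)*tau);
    each step consumes one uniform from `us` through the inverse CDF."""
    x = x0
    for u in us:
        x += poisson_inv(u, rate(x) * tau)
    return x

def paired_estimates(n_pairs, antithetic):
    est = []
    for _ in range(n_pairs):
        u = rng.random(20)
        v = 1.0 - u if antithetic else rng.random(20)
        est.append(0.5 * (tau_leap_path(u) + tau_leap_path(v)))
    return np.array(est)

anti = paired_estimates(500, antithetic=True)
indep = paired_estimates(500, antithetic=False)
print(f"variance, independent pairs: {indep.var():.2f}")
print(f"variance, antithetic pairs:  {anti.var():.2f}")
```

Because only the uniform inputs are manipulated, the chain itself really is treated as a black box, which is what makes the retrofit claim plausible.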
Representativeness of laboratory sampling procedures for the analysis of trace metals in soil.
Dubé, Jean-Sébastien; Boudreault, Jean-Philippe; Bost, Régis; Sona, Mirela; Duhaime, François; Éthier, Yannic
2015-08-01
This study was conducted to assess the representativeness of laboratory sampling protocols for purposes of trace metal analysis in soil. Five laboratory protocols were compared, including conventional grab sampling, to assess the influence of sectorial splitting, sieving, and grinding on measured trace metal concentrations and their variability. It was concluded that grinding was the most important factor in controlling the variability of trace metal concentrations. Grinding increased the reproducibility of sample mass reduction by rotary sectorial splitting by up to two orders of magnitude. Combined with rotary sectorial splitting, grinding increased the reproducibility of trace metal concentrations by almost three orders of magnitude compared to grab sampling. Moreover, results showed that if grinding is used as part of a mass reduction protocol by sectorial splitting, the effect of sieving on reproducibility became insignificant. Gy's sampling theory and practice was also used to analyze the aforementioned sampling protocols. While the theoretical relative variances calculated for each sampling protocol qualitatively agreed with the experimental variances, their quantitative agreement was very poor. It was assumed that the parameters used in the calculation of theoretical sampling variances may not correctly estimate the constitutional heterogeneity of soils or soil-like materials. Finally, the results have highlighted the pitfalls of grab sampling, namely, the fact that it does not exert control over incorrect sampling errors and that it is strongly affected by distribution heterogeneity.
A white paper: Operational efficiency. New approaches to future propulsion systems
NASA Technical Reports Server (NTRS)
Rhodes, Russel; Wong, George
1991-01-01
Advanced launch systems for the next generation of space transportation systems (1995 to 2010) must deliver large payloads (125,000 to 500,000 lb) to low earth orbit (LEO) at one tenth of today's cost, or 300 to 400 $/lb of payload. This cost represents an order of magnitude reduction from the Titan unmanned vehicle's cost of delivering payload to orbit. To achieve this sizable reduction, the operations cost as well as the engine cost must both be lower than for current engine systems. The Advanced Launch System (ALS) program is studying advanced engine designs, such as the Space Transportation Main Engine (STME), which has achieved a notable reduction in cost. The results are presented of a current study in which another level of cost reduction is achieved by designing the propulsion module around these advanced engines for enhanced operations efficiency and reduced operations cost.
Variance Reduction in Simulation Experiments: A Mathematical-Statistical Framework.
1983-12-01
Handscomb (1964), Granovsky (1981), Rubinstein (1981), and Wilson (1983b). The use of conditional expectations (CE) will be described as the term is…
Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte…
Genetic and environmental influences on blood pressure variability: a study in twins.
Xu, Xiaojing; Ding, Xiuhua; Zhang, Xinyan; Su, Shaoyong; Treiber, Frank A; Vlietinck, Robert; Fagard, Robert; Derom, Catherine; Gielen, Marij; Loos, Ruth J F; Snieder, Harold; Wang, Xiaoling
2013-04-01
Blood pressure variability (BPV) and its reduction in response to antihypertensive treatment are predictors of clinical outcomes; however, little is known about its heritability. In this study, we examined the relative influence of genetic and environmental sources of variance of BPV and the extent to which it may depend on race or sex in young twins. Twins were enrolled from two studies. One study included 703 white twins (308 pairs and 87 singletons) aged 18-34 years, whereas another study included 242 white twins (108 pairs and 26 singletons) and 188 black twins (79 pairs and 30 singletons) aged 12-30 years. BPV was calculated from 24-h ambulatory blood pressure recording. Twin modeling showed similar results in the separate analyses of both twin studies and in the meta-analysis. Familial aggregation was identified for SBP variability (SBPV) and DBP variability (DBPV), with genetic factors and common environmental factors together accounting for 18-40% and 23-31% of the total variance of SBPV and DBPV, respectively. Unique environmental factors were the largest contributor, explaining the remaining 60-82% and 69-77% of the total variance of SBPV and DBPV, respectively. No sex or race difference in BPV variance components was observed. The results remained the same after adjustment for 24-h blood pressure levels. The variance in BPV is predominantly determined by unique environment in youth and young adults, although familial aggregation due to additive genetic and/or common environment influences was also identified, explaining about 25% of the variance in BPV.
Prieve, Kurt; Rice, Amanda; Raynor, Peter C
2017-08-01
The aims of this study were to evaluate sound levels produced by compressed air guns in research and development (R&D) environments, replace conventional air gun models with advanced noise-reducing air nozzles, and measure changes in sound levels to assess the effectiveness of the advanced nozzles as engineering controls for noise. Ten different R&D manufacturing areas that used compressed air guns were identified and included in the study. A-weighted sound level and Z-weighted octave band measurements were taken simultaneously using a single instrument. In each area, three sets of measurements, each lasting for 20 sec, were taken 1 m away and perpendicular to the air stream of the conventional air gun while a worker simulated typical air gun work use. Two different advanced noise-reducing air nozzles were then installed. Sound level and octave band data were collected for each of these nozzles using the same methods as for the original air guns. Both of the advanced nozzles provided sound level reductions of about 7 dBA, on average. The highest noise reductions measured were 17.2 dBA for one model and 17.7 dBA for the other. In two areas, the advanced nozzles yielded no sound level reduction, or they produced small increases in sound level. The octave band data showed strong similarities in sound level among all air gun nozzles within the 10-1,000 Hz frequency range. However, the advanced air nozzles generally had lower noise contributions in the 1,000-20,000 Hz range. The observed decreases at these higher frequencies caused the overall sound level reductions that were measured. Installing new advanced noise-reducing air nozzles can provide large sound level reductions in comparison to existing conventional nozzles, which has direct benefit for hearing conservation efforts.
DOT National Transportation Integrated Search
1995-08-01
KEYWORDS : RESEARCH AND DEVELOPMENT OR R&D, CRASH REDUCTION, FATALITIES REDUCTION, LATERAL GUIDANCE, LONGITUDINAL GUIDANCE, ADVANCED VEHICLE CONTROL & SAFETY SYSTEMS OR AVCSS, ADVANCED VEHICLE CONTROL SYSTEM OR AVCS, INTELLIGENT VEHICLE INITIATIV...
Xu, Lijuan; Song, Rhayun
2016-08-01
The purpose of the study was to determine how work-family-school role conflict and social support influence psychological well-being among registered nurses pursuing an advanced degree. A cross-sectional, correlational study design was used. Convenience sampling was used to recruit 320 registered nurses pursuing an advanced nursing degree at 13 hospitals in Korea, from June to October 2011. Data were analyzed using structural equation modeling with the AMOS program. Confirmatory factor analyses were conducted to evaluate the measurement model prior to the testing of study hypotheses before and after controlling for extraneous variables. The fit parameters of the modified model (χ²/df=2.01, GFI=0.91, AGFI=0.89, CFI=0.92, SRMR=0.068, and RMSEA=0.065) indicated its suitability as the research model. This model explained 45% of the variance in work-related psychological well-being and 52% of the variance in general psychological well-being. Both social support and work-family-school role conflict exerted significant effects on work-related psychological well-being and general psychological well-being. The findings of the present study imply that work-family-school role conflict influences the psychological well-being of registered nurses pursuing an advanced degree. It is necessary for nursing administrators to develop strategies to help registered nurses to manage their multiple roles and improve both their work-related psychological well-being and their general psychological well-being. Copyright © 2015 Elsevier Inc. All rights reserved.
An Analysis of the Readability of Financial Accounting Textbooks.
ERIC Educational Resources Information Center
Smith, Gerald; And Others
1981-01-01
The Flesch formula was used to calculate the readability of 15 financial accounting textbooks. The 15 textbooks represented introductory, intermediate, and advanced levels and also were classified by five different publishers. Two-way analysis of variance and Tukey's post hoc analysis revealed some significant differences. (Author/CT)
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
Effects Of Desensitization Treatment On Core-Condition Training
ERIC Educational Resources Information Center
Fry, P. S.
1973-01-01
Pre- and posttest ratings on measures of helping skills such as empathy, respect, concreteness, and genuineness were obtained in the preliminary and advanced training. A significant training effect was obtained for both groups. Desensitization treatment was a significant source of variance for the experimental subjects in training. (Author/LA)
2001 NASA Seal/secondary Air System Workshop, Volume 1. Volume 1
NASA Technical Reports Server (NTRS)
Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor)
2002-01-01
The 2001 NASA Seal/Secondary Air System Workshop covered the following topics: (i) overview of NASA's Vision for 21st Century Aircraft; (ii) overview of NASA-sponsored Ultra-Efficient Engine Technology (UEET); (iii) reviews of sealing concepts, test results, experimental facilities, and numerical predictions; and (iv) reviews of material development programs relevant to advanced seals development. The NASA UEET overview illustrates for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. The NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to investigate advanced reusable space vehicle technologies (X-38) and advanced space ram/scramjet propulsion systems. Seal challenges posed by these advanced systems include high-temperature operation, resiliency at the operating temperature to accommodate sidewall flexing, and durability to last many missions.
Daugherty, Ana M; Bender, Andrew R; Yuan, Peng; Raz, Naftali
2016-06-01
Impairment of hippocampus-dependent cognitive processes has been proposed to underlie age-related deficits in navigation. Animal studies suggest a differential role of hippocampal subfields in various aspects of navigation, but that hypothesis has not been tested in humans. In this study, we examined the association between the volume of hippocampal subfields and age differences in virtual spatial navigation. In a sample of 65 healthy adults (age 19-75 years), advanced age was associated with a slower rate of improvement, operationalized as shortening of the search path over 25 learning trials on a virtual Morris water maze task. The deficits were partially explained by the greater complexity of older adults' search paths. Larger subiculum and entorhinal cortex volumes were associated with a faster decrease in search path complexity, which in turn explained faster shortening of the search distance. Larger Cornu Ammonis (CA) 1-2 volume was associated with faster distance shortening, but not with path complexity reduction. Age differences in regional volumes collectively accounted for 23% of the age-related variance in navigation learning. Independent of subfield volumes, advanced age was associated with poorer performance across all trials, even after reaching the asymptote. Thus, subiculum and CA1-2 volumes were associated with the speed of acquisition, but not the magnitude of gains, in virtual maze navigation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Derived variants at six genes explain nearly half of size reduction in dog breeds
Rimbault, Maud; Beale, Holly C.; Schoenebeck, Jeffrey J.; Hoopes, Barbara C.; Allen, Jeremy J.; Kilroy-Glynn, Paul; Wayne, Robert K.; Sutter, Nathan B.; Ostrander, Elaine A.
2013-01-01
Selective breeding of dogs by humans has generated extraordinary diversity in body size. A number of multibreed analyses have been undertaken to identify the genetic basis of this diversity. We analyzed four loci discovered in a previous genome-wide association study that used 60,968 SNPs to identify size-associated genomic intervals, which were too large to assign causative roles to genes. First, we performed fine-mapping to define critical intervals that included the candidate genes GHR, HMGA2, SMAD2, and STC2, identifying five highly associated markers at the four loci. We hypothesize that three of the variants are likely to be causative. We then genotyped each marker, together with previously reported size-associated variants in the IGF1 and IGF1R genes, on a panel of 500 domestic dogs from 93 breeds, and identified the ancestral allele by genotyping the same markers on 30 wild canids. We observed that the derived alleles at all markers correlated with reduced body size, and smaller dogs are more likely to carry derived alleles at multiple markers. However, breeds are not generally fixed at all markers; multiple combinations of genotypes are found within most breeds. Finally, we show that 46%–52.5% of the variance in body size of dog breeds can be explained by seven markers in proximity to exceptional candidate genes. Among breeds with standard weights <41 kg (90 lb), the genotypes accounted for 64.3% of variance in weight. This work advances our understanding of mammalian growth by describing genetic contributions to canine size determination in non-giant dog breeds. PMID:24026177
Asano, Kenichiro; Ogata, Ai; Tanaka, Keiko; Ide, Yoko; Sankoda, Akiko; Kawakita, Chieko; Nishikawa, Mana; Ohmori, Kazuyoshi; Kinomura, Masaru; Shimada, Noriaki; Fukushima, Masaki
2014-05-01
The aim of this study was to identify the main influencing factor of the shear wave velocity (SWV) of the kidneys measured by acoustic radiation force impulse elastography. The SWV was measured in the kidneys of 14 healthy volunteers and 319 patients with chronic kidney disease. The estimated glomerular filtration rate was calculated by the serum creatinine concentration and age. As an indicator of arteriosclerosis of large vessels, the brachial-ankle pulse wave velocity was measured in 183 patients. Compared to the degree of interobserver and intraobserver deviation, a large variance of SWV values was observed in the kidneys of the patients with chronic kidney disease. Shear wave velocity values in the right and left kidneys of each patient correlated well, with high correlation coefficients (r = 0.580-0.732). The SWV decreased concurrently with a decline in the estimated glomerular filtration rate. A low SWV was obtained in patients with a high brachial-ankle pulse wave velocity. Despite progression of renal fibrosis in the advanced stages of chronic kidney disease, these results were in contrast to findings for chronic liver disease, in which progression of hepatic fibrosis results in an increase in the SWV. Considering that a high brachial-ankle pulse wave velocity represents the progression of arteriosclerosis in the large vessels, the reduction of elasticity succeeding diminution of blood flow was suspected to be the main influencing factor of the SWV in the kidneys. This study indicates that diminution of blood flow may affect SWV values in the kidneys more than the progression of tissue fibrosis. Future studies for reducing data variance are needed for effective use of acoustic radiation force impulse elastography in patients with chronic kidney disease.
iTemplate: A template-based eye movement data analysis approach.
Xiao, Naiqi G; Lee, Kang
2018-02-08
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variance in their size, shape, and location is unavoidable. This variance affects not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variance introduced during AOI creation and modification and to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users need to create only one set of AOIs for the template in order to analyze eye movement data, rather than a unique set of AOIs for each individual stimulus. This change greatly reduces the error caused by manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
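The linear-transformation registration step above can be sketched with a least-squares affine fit. This is a minimal illustration, not iTemplate's actual code: the landmark coordinates, fixation points, and function names below are all hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) points onto dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])            # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) coefficient matrix
    return coef

def apply_affine(coef, pts):
    A = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return A @ coef

# Hypothetical landmarks (e.g., eyes and nose) on one stimulus vs. the template.
stim_landmarks = np.array([[120., 200.], [220., 205.], [170., 300.]])
tmpl_landmarks = np.array([[100., 180.], [200., 180.], [150., 280.]])

coef = fit_affine(stim_landmarks, tmpl_landmarks)

# Register raw fixations recorded on the stimulus into template space,
# so a single set of template AOIs serves every stimulus.
fixations = np.array([[130., 210.], [210., 290.]])
registered = apply_affine(coef, fixations)
print(registered)
```

With three non-collinear landmark pairs the six affine parameters are determined exactly; with more landmarks the fit becomes a genuine least-squares registration.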
Physical heterogeneity control on effective mineral dissolution rates
NASA Astrophysics Data System (ADS)
Jung, Heewon; Navarre-Sitchler, Alexis
2018-04-01
Hydrologic heterogeneity may be an important factor contributing to the discrepancy between laboratory- and field-measured dissolution rates, but the governing factors influencing mineral dissolution rates among various representations of physical heterogeneity remain poorly understood. Here, we present multiple reactive transport simulations of anorthite dissolution in 2D latticed random permeability fields and link the information from local grid-scale (1 cm or 4 m) dissolution rates to domain-scale (1 m or 400 m) effective dissolution rates measured by the flux-weighted average of an ensemble of flow paths. We compare results of homogeneous models to heterogeneous models with different structure and layered permeability distributions within the model domain. Chemistry is simplified to a single dissolving primary mineral (anorthite) distributed homogeneously throughout the domain and a single secondary mineral (kaolinite) that is allowed to dissolve or precipitate. Results show that larger correlation structures (i.e., longer integral scales) and high variance in the permeability distribution are two important factors inducing a reduction in effective mineral dissolution rates compared to homogeneous permeability domains. Larger correlation structures produce larger zones of low permeability where diffusion is an important transport mechanism. Due to the increased residence time under slow diffusive transport, the saturation state of a solute with respect to a reacting mineral approaches equilibrium and reduces the reaction rate. High variance in the permeability distribution favors the development of large low-permeability zones that intensify the reduction in mixing and in the effective dissolution rate. However, the degree of reduction in effective dissolution rate observed in 1 m × 1 m domains is too small (<1% reduction from the corresponding homogeneous case) to explain the several orders of magnitude of reduction observed in many field studies.
When multimodality in permeability distribution is approximated by high permeability variance in 400 m × 400 m domains, the reduction in effective dissolution rate increases due to the effect of long diffusion length scales through zones with very slow reaction rates. The observed scale dependence becomes complicated when pH dependent kinetics are compared to the results from pH independent rate constants. In small domains where the entire domain is reactive, faster anorthite dissolution rates and slower kaolinite precipitation rates relative to pH independent rates at far-from-equilibrium conditions reduce the effective dissolution rate by increasing the saturation state. However, in large domains where less- or non-reactive zones develop, higher kaolinite precipitation rates in less reactive zones increase the effective anorthite dissolution rates relative to the rates observed in pH independent cases.
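The domain-scale effective rate described above, a flux-weighted average over an ensemble of flow paths, can be illustrated with a minimal sketch. The fluxes and local rates below are invented for illustration and are not values from the study.

```python
import numpy as np

# Hypothetical ensemble of flow paths: each carries a water flux q_i and a
# local anorthite dissolution rate r_i (mol m^-2 s^-1, illustrative values).
# Slow, low-permeability paths sit near equilibrium and react slowly.
q = np.array([5.0, 2.0, 0.5, 0.1])          # high- to low-permeability paths
r = np.array([1e-10, 8e-11, 3e-11, 5e-12])  # far-from- to near-equilibrium rates

# Flux-weighted effective rate: fast paths dominate the domain-scale signal.
r_eff = np.sum(q * r) / np.sum(q)

# Compared with a homogeneous domain where every path dissolves at the
# far-from-equilibrium rate r[0], heterogeneity reduces the effective rate.
reduction = 1.0 - r_eff / r[0]
print(f"effective rate = {r_eff:.3e} mol m^-2 s^-1, reduction = {100*reduction:.0f}%")
```

The effective rate always lies between the slowest and fastest local rates; how far it falls below the homogeneous value depends on how much flux the near-equilibrium zones carry.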
Lourenco, Stella F.; Bonny, Justin W.; Fernandez, Edmund P.; Rao, Sonia
2012-01-01
Humans and nonhuman animals share the capacity to estimate, without counting, the number of objects in a set by relying on an approximate number system (ANS). Only humans, however, learn the concepts and operations of symbolic mathematics. Despite vast differences between these two systems of quantification, neural and behavioral findings suggest functional connections. Another line of research suggests that the ANS is part of a larger, more general system of magnitude representation. Reports of cognitive interactions and common neural coding for number and other magnitudes such as spatial extent led us to ask whether, and how, nonnumerical magnitude interfaces with mathematical competence. On two magnitude comparison tasks, college students estimated (without counting or explicit calculation) which of two arrays was greater in number or cumulative area. They also completed a battery of standardized math tests. Individual differences in both number and cumulative area precision (measured by accuracy on the magnitude comparison tasks) correlated with interindividual variability in math competence, particularly advanced arithmetic and geometry, even after accounting for general aspects of intelligence. Moreover, analyses revealed that whereas number precision contributed unique variance to advanced arithmetic, cumulative area precision contributed unique variance to geometry. Taken together, these results provide evidence for shared and unique contributions of nonsymbolic number and cumulative area representations to formally taught mathematics. More broadly, they suggest that uniquely human branches of mathematics interface with an evolutionarily primitive general magnitude system, which includes partially overlapping representations of numerical and nonnumerical magnitude. PMID:23091023
2002 NASA Seal/Secondary Air System Workshop. Volume 1
NASA Technical Reports Server (NTRS)
Steinetz, Bruce M. (Editor); Hendricks, Robert C. (Editor)
2003-01-01
The 2002 NASA Seal/Secondary Air System Workshop covered the following topics: (i) Overview of NASA's perspective of aeronautics and space technology for the 21st century; (ii) Overview of the NASA-sponsored Ultra-Efficient Engine Technology (UEET), Turbine-Based Combined-Cycle (TBCC), and Revolutionary Turbine Accelerator (RTA) programs; (iii) Overview of NASA Glenn's seal program aimed at developing advanced seals for NASA's turbomachinery, space propulsion, and reentry vehicle needs; (iv) Reviews of sealing concepts, test results, experimental facilities, and numerical predictions; and (v) Reviews of material development programs relevant to advanced seals development. The NASA UEET and TBCC/RTA program overviews illustrated for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. For example, the NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to investigate advanced reusable space vehicle technologies (X-38) and advanced space ram/scramjet propulsion systems. Seal challenges posed by these advanced systems include high-temperature operation, resiliency at the operating temperature to accommodate sidewall flexing, and durability to last many missions.
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but almost never falls below a value of one. This relative variance often substantially exceeds a recently obtained theoretical lower limit for biochemical feedback systems. Adding transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and the persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost needed to achieve substantial noise suppression can be different away from the optimal frontier: for transcriptional autorepression, it is frequently negligible.
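The 'variance per molecule of mean expression' above is the Fano factor, and the qualitative claim that transcriptional autorepression pushes it below one can be checked with a minimal Gillespie simulation. The propensity functions and rate constants below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def gillespie_fano(birth_rate, t_end=2000.0, seed=0):
    """Time-averaged mean and Fano factor (variance/mean) of a protein count n.

    birth_rate: callable n -> production propensity; degradation propensity is n.
    """
    rng = np.random.default_rng(seed)
    n, t = 10, 0.0
    tot_t, sum_n, sum_n2 = 0.0, 0.0, 0.0
    while t < t_end:
        b, d = birth_rate(n), float(n)
        dt = rng.exponential(1.0 / (b + d))
        tot_t += dt                      # time-weighted statistics
        sum_n += n * dt
        sum_n2 += n * n * dt
        t += dt
        n += 1 if rng.random() < b / (b + d) else -1
    mean = sum_n / tot_t
    var = sum_n2 / tot_t - mean * mean
    return mean, var / mean

# Unregulated gene: constant production, Poisson stationary state (Fano ~ 1).
m0, f0 = gillespie_fano(lambda n: 10.0)

# Transcriptional autorepression: production falls as protein accumulates;
# k0 = 30 and K = 5 keep the steady-state mean near 10 (30 / (1 + n/5) = n).
m1, f1 = gillespie_fano(lambda n: 30.0 / (1.0 + n / 5.0))

print(f"unregulated:   mean = {m0:.1f}, Fano = {f0:.2f}")
print(f"autorepressed: mean = {m1:.1f}, Fano = {f1:.2f}")
```

Matching the means isolates the effect of the feedback itself: at the same average expression, the autorepressed circuit shows the smaller relative variance.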
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.
1997-12-31
This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.
Compression of Morbidity and Mortality: New Perspectives
Stallard, Eric
2017-01-01
Compression of morbidity is a reduction over time in the total lifetime days of chronic disability, reflecting a balance between (1) morbidity incidence rates and (2) case-continuance rates—generated by case-fatality and case-recovery rates. Chronic disability includes limitations in activities of daily living and cognitive impairment, which can be covered by long-term care insurance. Morbidity improvement can lead to a compression of morbidity if the reductions in age-specific prevalence rates are sufficiently large to overcome the increases in lifetime disability due to concurrent mortality improvements and progressively higher disability prevalence rates with increasing age. Compression of mortality is a reduction over time in the variance of age at death. Such reductions are generally accompanied by increases in the mean age at death; otherwise, for the variances to decrease, the death rates above the mean age at death would need to increase, and this has rarely been the case. Mortality improvement is a reduction over time in the age-specific death rates and a corresponding increase in the cumulative survival probabilities and age-specific residual life expectancies. Mortality improvement does not necessarily imply concurrent compression of mortality. This paper reviews these concepts, describes how they are related, shows how they apply to changes in mortality over the past century and to changes in morbidity over the past 30 years, and discusses their implications for future changes in the United States. The major findings of the empirical analyses are the substantial slowdowns in the degree of mortality compression over the past half century and the unexpectedly large degree of morbidity compression that occurred over the morbidity/disability study period 1984–2004; evidence from other published sources suggests that morbidity compression may be continuing. PMID:28740358
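The distinction above between mortality improvement and compression of mortality can be illustrated with a Gompertz hazard sketch (our choice of model, not the paper's): proportionally lowering death rates raises the mean age at death while leaving its variance nearly unchanged, whereas steepening the rise of mortality with age shrinks the variance. All parameter values are illustrative.

```python
import numpy as np

def death_age_stats(a, b):
    """Mean and variance of age at death under a Gompertz hazard mu(x) = a*exp(b*x)."""
    ages = np.arange(0.0, 120.0, 0.05)
    hazard = a * np.exp(b * ages)
    survival = np.exp(-(a / b) * (np.exp(b * ages) - 1.0))
    w = hazard * survival                 # discretized density of age at death
    w = w / w.sum()
    mean = (ages * w).sum()
    var = ((ages - mean) ** 2 * w).sum()
    return mean, var

base = death_age_stats(a=1e-4, b=0.10)
improved = death_age_stats(a=0.5e-4, b=0.10)  # proportional reduction in death rates
steeper = death_age_stats(a=1e-4, b=0.12)     # steeper rise of mortality with age

# Halving the hazard level raises the mean age at death but barely changes its
# variance: mortality improvement without compression of mortality.
# A steeper slope shrinks the variance: compression of mortality.
for label, (m, v) in [("base", base), ("improved", improved), ("steeper", steeper)]:
    print(f"{label:9s} mean = {m:5.1f}  variance = {v:6.1f}")
```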
Detecting Nonadditivity in Single-Facet Generalizability Theory Applications: Tukey's Test
ERIC Educational Resources Information Center
Lin, Chih-Kai; Zhang, Jinming
2018-01-01
Under the generalizability-theory (G-theory) framework, the estimation precision of variance components (VCs) is of significant importance in that they serve as the foundation of estimating reliability. Zhang and Lin advanced the discussion of nonadditivity in data from a theoretical perspective and showed the adverse effects of nonadditivity on…
It has been fifty years since Kirkham and Bartholmew (1954) presented the conceptual framework and derived the mathematical equations that formed the basis of the now commonly employed method of 15N isotope dilution. Although many advances in methodology and analysis have been ma...
USDA-ARS?s Scientific Manuscript database
Eddy covariance (EC) is a well-established, non-intrusive observational technique that has long been used to measure the net carbon balance of numerous ecosystems including crop lands for perennial crops such as orchards and vineyards, and pasturelands. While EC measures net carbon fluxes well, it ...
Advancing Multicultural Education: New Historicism in the High School English Classroom
ERIC Educational Resources Information Center
Li, Sidney C.
2015-01-01
High schools across the country are restructuring their curricular frameworks to meet the new Common Core State Standards (CCSS), which emphasize an understanding of cultural diversity in addition to critical thinking and literacy. Despite curricular variance among high schools, the significant roles non-white races have played in constructing a…
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity of fMRI and functional connectivity analyses. We show a dramatic improvement over current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of concurrently measured breathing and cardiac cycles. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework makes modeling the noise straightforward. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). It is our hypothesis that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
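The traditional Fourier-series correction described above can be sketched as follows. This is a RETROICOR-style toy on synthetic data (the phases, amplitudes, and task signal are invented), not the adaptive method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols, tr = 300, 2.0
t = np.arange(n_vols) * tr

# Synthetic voxel time series: task signal + cardiac-coupled noise + white noise.
cardiac_phase = 2 * np.pi * (1.05 * t % 1.0)   # phase of a ~1.05 Hz cycle, aliased by TR
task = np.sin(2 * np.pi * t / 60.0)
y = task + 0.8 * np.cos(cardiac_phase) + 0.3 * rng.standard_normal(n_vols)

# Second-order Fourier expansion of the cardiac phase (the traditional regressors).
order = 2
X = np.column_stack(
    [np.ones(n_vols)]
    + [f(k * cardiac_phase) for k in range(1, order + 1) for f in (np.sin, np.cos)]
)

# Least-squares fit, then subtract the fitted cyclic component (keep the mean).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cleaned = y - X @ beta + beta[0]

print(np.var(y - task), np.var(cleaned - task))   # residual noise variance drops
```

Any incidental correlation between the regressors and the task signal also removes a sliver of signal of interest; the adaptive approach above addresses exactly this by replacing the full expansion with a few fitted regressors.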
NASA Technical Reports Server (NTRS)
Hill, Emma M.; Ponte, Rui M.; Davis, James L.
2007-01-01
Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with the IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weakness in the model at predicting sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.
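The per-station regression described above, an OM admittance (scale) parameter plus seasonal coefficients, can be sketched on synthetic data. The series length, amplitudes, noise level, and the 34-month OM period below are invented for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(120)                     # ten years of monthly means
om = np.sin(2 * np.pi * months / 34.0)      # ocean-model prediction (illustrative)

# Synthetic tide-gauge series: mis-scaled OM + residual annual cycle + noise (mm).
gauge = 1.4 * om + 20 * np.cos(2 * np.pi * months / 12.0) + 5 * rng.standard_normal(120)

# Regression model: constant + OM admittance + annual sine/cosine terms.
X = np.column_stack([
    np.ones_like(months, dtype=float),
    om,
    np.sin(2 * np.pi * months / 12.0),
    np.cos(2 * np.pi * months / 12.0),
])
beta, *_ = np.linalg.lstsq(X, gauge, rcond=None)
resid = gauge - X @ beta

# Variance reduction in the sense used above: 1 - var(residual)/var(original).
var_reduction = 1.0 - resid.var() / gauge.var()
print(f"admittance = {beta[1]:.2f}, variance reduction = {100 * var_reduction:.0f}%")
```

The fitted admittance recovers the mis-scaling of the OM, and the seasonal terms absorb the annual cycle the OM missed, which is exactly why the paper's variance reduction improves from 53% to 71%.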
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Crocker, Andy; Greene, William D.
2017-01-01
Goals of NASA's Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to: (1) Reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS. (2) Enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. SLS Block 1 vehicle is being designed to carry 70 mT to LEO: (1) Uses two five-segment solid rocket boosters (SRBs) similar to the boosters that helped power the space shuttle to orbit. Evolved 130 mT payload class rocket requires an advanced booster with more thrust than any existing U.S. liquid-or solid-fueled boosters
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy, in terms of the variance of the registration error, was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal-to-noise ratio. The final expressions were presented to emphasize both the form and the simplicity of their representation. For the situation where relative spatial distortions exist between the images to be registered, expressions were derived for estimating the loss in output signal-to-noise ratio due to these spatial distortions. These results are expressed in terms of a reduction factor.
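In symbols (our notation, not the paper's), with $\sigma_r^2$ the registration-error variance, $B_{\mathrm{eff}}$ the effective bandwidth, and $\mathrm{SNR}$ the signal-to-noise ratio, the stated proportionality reads:

```latex
\sigma_{r}^{2} \;\propto\; \frac{1}{B_{\mathrm{eff}}^{2}\,\mathrm{SNR}}
```

That is, doubling the effective bandwidth reduces the error variance fourfold, while doubling the SNR halves it.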
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
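A minimal sketch of the factor-model covariance estimation underlying this line of work (a plain PCA-based k-factor estimator, not the DVA algorithm itself) shows why such estimators help when variables outnumber observations. All dimensions and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, k = 100, 60, 1

# True covariance: one common factor plus unit idiosyncratic noise.
b = rng.standard_normal(p)
sigma_true = np.outer(b, b) + np.eye(p)

# Simulated observations (fewer samples than variables) and the sample covariance.
X = rng.multivariate_normal(np.zeros(p), sigma_true, size=n)
S = np.cov(X, rowvar=False)

# k-factor estimator: top-k eigenpairs of S reconstruct the common part,
# the diagonal residual picks up the idiosyncratic variances.
w, V = np.linalg.eigh(S)                     # eigenvalues in ascending order
B = V[:, -k:] * np.sqrt(w[-k:])
sigma_factor = B @ B.T + np.diag(np.clip(np.diag(S - B @ B.T), 1e-6, None))

err_sample = np.linalg.norm(S - sigma_true)
err_factor = np.linalg.norm(sigma_factor - sigma_true)
print(err_sample, err_factor)
```

Zeroing the noisy off-diagonal residual is what makes the factor estimator beat the raw sample covariance here; the systematic error in the retained eigenpairs is the part DVA targets.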
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... (OMB) for review, as required by the Paperwork Reduction Act. The Department is soliciting public... resultant costs also serve to further stabilize the mortgage insurance premiums charged by FHA and the... Insurance Benefits, HUD-90035 Information/Disclosure, HUD-90041 Request for Variance, Pre-foreclosure sale...
Decomposition of Some Well-Known Variance Reduction Techniques. Revision.
1985-05-01
"use a family of transformations to convert given samples into samples conditioned on a given characteristic (p. 04)." Dub and Horowitz (1979), Granovsky "Antithetic Variates Revisited," Commun. ACM 26, 11, 964-971. Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte Carlo," SIAM J. Alg
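Of the techniques decomposed in this report, antithetic variates is the simplest to demonstrate. A minimal sketch for estimating E[exp(U)] with U uniform on (0, 1): pairing each draw U with 1 - U induces negative correlation for the monotone integrand, cutting the estimator variance at the same total number of function evaluations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0,1); true value is e - 1.
u = rng.random(n)
plain = np.exp(u)

# Antithetic variates: half as many draws, each paired with its reflection 1 - U,
# so the total number of exp() evaluations matches the plain estimator.
u2 = rng.random(n // 2)
anti = 0.5 * (np.exp(u2) + np.exp(1.0 - u2))

print(plain.mean(), anti.mean())                # both near e - 1
print(plain.var() / n, anti.var() / (n // 2))   # variance of each estimator's mean
```

For this integrand the antithetic pairs are so strongly negatively correlated that the estimator variance drops by more than an order of magnitude.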
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
Conrad, Martina; Engelmann, Dorit; Friedrich, Michael; Scheffold, Katharina; Philipp, Rebecca; Schulz-Kindermann, Frank; Härter, Martin; Mehnert, Anja; Koranyi, Susan
2018-04-13
There are only a few valid instruments measuring couples' communication in patients with cancer for German-speaking countries. The Couple Communication Scale (CCS) is an established instrument for assessing couples' communication. However, there has been no evidence to date regarding the psychometric properties of the German version of the CCS, and its assumed one-factor structure had not yet been verified for patients with advanced cancer. The CCS was validated as part of the study "Managing cancer and living meaningfully" (CALM) on N = 136 patients with advanced cancer (≥18 years, UICC stage III/IV). The psychometric properties of the scale were calculated (factor reliability, item reliability, average variance extracted [AVE]) and a confirmatory factor analysis was conducted (maximum likelihood estimation). Concurrent validity was tested against symptoms of anxiety (GAD-7), depression (BDI-II), and attachment insecurity (ECR-M16). In the confirmatory factor analysis, the one-factor structure showed a low but acceptable model fit and explained on average 49% of each item's variance (AVE). The CCS has excellent internal consistency (Cronbach's α = 0.91) and was negatively associated with attachment insecurity (ECR-M16: anxiety: r = -0.55, p < 0.01; avoidance: r = -0.42, p < 0.01) as well as with anxiety (GAD-7: r = -0.20, p < 0.05) and depression (BDI-II: r = -0.27, p < 0.01). The CCS is a reliable and valid instrument for measuring couples' communication in patients with advanced cancer. © Georg Thieme Verlag KG Stuttgart · New York.
Willardson, Jeffrey M; Bressel, Eadric
2004-08-01
The purpose of this research was to devise prediction equations whereby a 10 repetition maximum (10RM) for the free weight parallel squat could be predicted using the following predictor variables: 10RM for the 45 degrees angled leg press, body mass, and limb length. Sixty men were tested over a 3-week period, with 1 testing session each week. During each testing session, subjects performed a 10RM for the free weight parallel squat and 45 degrees angled leg press. Stepwise multiple regression analysis showed leg press mass lifted to be a significant predictor of squat mass lifted for both the advanced and the novice groups (p < 0.05). Leg press mass lifted accounted for approximately 25% of the variance in squat mass lifted for the novice group and 55% of the variance in squat mass lifted for the advanced group. Limb length and body mass were not significant predictors of squat mass lifted for either group. The following prediction equations were devised: (a) novice group squat mass = leg press mass (0.210) + 36.244 kg, (b) advanced group squat mass = leg press mass (0.310) + 19.438 kg, and (c) subject pool squat mass = leg press mass (0.354) + 2.235 kg. These prediction equations may save time and reduce the risk of injury when switching from the leg press to the squat exercise.
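The three reported regression equations can be wrapped in a small helper. The 200 kg input is a made-up example; the equations apply only to a 10RM on the 45-degree leg press expressed in kilograms, as described above.

```python
def predict_squat_10rm(leg_press_kg: float, group: str = "novice") -> float:
    """Predict the 10RM free-weight parallel squat (kg) from the 45-degree
    leg press 10RM (kg), using the regression equations reported above."""
    equations = {
        "novice":   (0.210, 36.244),
        "advanced": (0.310, 19.438),
        "pool":     (0.354, 2.235),
    }
    slope, intercept = equations[group]
    return slope * leg_press_kg + intercept

# A hypothetical lifter pressing 200 kg for 10 reps:
print(round(predict_squat_10rm(200.0, "novice"), 1))    # 78.2
print(round(predict_squat_10rm(200.0, "advanced"), 1))  # 81.4
```

Note the much steeper slope for the advanced group, consistent with the leg press explaining 55% of squat variance there versus 25% for novices.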
Risk factors of chronic periodontitis on healing response: a multilevel modelling analysis.
Song, J; Zhao, H; Pan, C; Li, C; Liu, J; Pan, Y
2017-09-15
Chronic periodontitis is a multifactorial polygenetic disease with an increasing number of associated factors that have been identified over recent decades. Longitudinal epidemiologic studies have demonstrated that the risk factors were related to the progression of the disease. A traditional multivariate regression model was used to find risk factors associated with chronic periodontitis. However, the approach requirement of standard statistical procedures demands individual independence. Multilevel modelling (MLM) data analysis has widely been used in recent years, regarding thorough hierarchical structuring of the data, decomposing the error terms into different levels, and providing a new analytic method and framework for solving this problem. The purpose of our study is to investigate the relationship of clinical periodontal index and the risk factors in chronic periodontitis through MLM analysis and to identify high-risk individuals in the clinical setting. Fifty-four patients with moderate to severe periodontitis were included. They were treated by means of non-surgical periodontal therapy, and then made follow-up visits regularly at 3, 6, and 12 months after therapy. Each patient answered a questionnaire survey and underwent measurement of clinical periodontal parameters. Compared with baseline, probing depth (PD) and clinical attachment loss (CAL) improved significantly after non-surgical periodontal therapy with regular follow-up visits at 3, 6, and 12 months after therapy. The null model and variance component models with no independent variables included were initially obtained to investigate the variance of the PD and CAL reductions across all three levels, and they showed a statistically significant difference (P < 0.001), thus establishing that MLM data analysis was necessary. Site-level had effects on PD and CAL reduction; those variables could explain 77-78% of PD reduction and 70-80% of CAL reduction at 3, 6, and 12 months. 
The other levels explained only 20-30% of the PD and CAL reductions; site-level had the greatest effect on PD and CAL reduction. Non-surgical periodontal therapy with regular follow-up visits had a remarkable curative effect. All three levels had a substantial influence on the reduction of PD and CAL, with site-level having the largest effect.
Improved Hybrid Modeling of Spent Fuel Storage Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bibber, Karl van
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems.
This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy at lower computational effort for many problems of energy, security, and economic importance.
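CADIS itself couples a deterministic adjoint solution to MC weight windows; the underlying variance-reduction idea can be shown with a one-dimensional toy (our simplification, not the CADIS-Ω method): bias sampling toward the rare deep-penetration event and correct each history with a likelihood-ratio weight. The shield thickness and biasing distribution below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
depth = 10.0   # shield thickness in mean free paths; P(penetrate) = exp(-10)

# Analog Monte Carlo: path lengths from the true Exp(1) distribution.
# Penetrating histories are so rare that most batches score almost nothing.
x = rng.exponential(1.0, n)
analog = (x > depth).astype(float)

# Importance sampling: stretch the path-length distribution (Exp with mean 10)
# so penetrating histories are common, and carry the weight f(y)/g(y).
y = rng.exponential(depth, n)
w = np.exp(-y) / (np.exp(-y / depth) / depth)
biased = (y > depth) * w

true_p = np.exp(-depth)
print(analog.mean(), biased.mean(), true_p)
```

The biased estimator remains unbiased because of the weights, but its variance is orders of magnitude smaller; CADIS automates the analogous biasing in space, energy, and angle using the adjoint flux.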
Fermentation and Hydrogen Metabolism Affect Uranium Reduction by Clostridia
Gao, Weimin; Francis, Arokiasamy J.
2013-01-01
Previously, it has been shown that not only is uranium reduction under fermentation conditions common among clostridia species, but also that the strains differ in the extent of their capability and that the pH of the culture significantly affects uranium(VI) reduction. In this study, using HPLC and GC techniques, the metabolic properties of the clostridial strains active in uranium reduction under fermentation conditions were characterized, and their effects on the variance in uranium-reduction capability are discussed. The relationship between hydrogen metabolism and uranium reduction was then further explored, and the important role played by hydrogenase in uranium(VI) and iron(III) reduction by clostridia demonstrated. When hydrogen was provided as the headspace gas, uranium(VI) reduction occurred in the presence of whole cells of clostridia, in contrast to nitrogen as the headspace gas. Without clostridia cells, hydrogen alone could not reduce uranium(VI). In alignment with this observation, it was also found that either copper(II) addition or iron depletion in the medium could compromise uranium reduction by clostridia. Finally, a comprehensive model is proposed to explain uranium reduction by clostridia and its relationship to overall metabolism, especially hydrogen (H2) production.
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots' (generated with the high-fidelity model), to project the entire set of input and state variables of the model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, since each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
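The contrast between a dense POD basis and a sparse one can be illustrated on a toy snapshot matrix. The soft-thresholding step below is a crude stand-in for full SPCA, and all data are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshots: 200 samples x 30 state variables, driven by two latent
# modes that each touch only a few variables (assumption for illustration).
t = rng.normal(size=(200, 2))
W = np.zeros((2, 30))
W[0, :4] = 1.0
W[1, 10:14] = 1.0
X = t @ W + 0.05 * rng.normal(size=(200, 30))
X -= X.mean(axis=0)

# Dense POD/PCA basis: leading right singular vectors of the snapshot matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
dense_basis = Vt[:2]

# Crude sparse surrogate for SPCA: soft-threshold the loadings, renormalise.
thr = 0.1
sparse_basis = np.sign(dense_basis) * np.maximum(np.abs(dense_basis) - thr, 0.0)
sparse_basis /= np.linalg.norm(sparse_basis, axis=1, keepdims=True)

# The dense basis has non-zero weight on every variable; the sparse basis
# keeps only the few variables that carry the signal, aiding interpretation.
print(np.count_nonzero(dense_basis), np.count_nonzero(sparse_basis))
```

The sparse basis trades a little captured variance for loadings that can be read as small groups of physical variables, which is the interpretability argument made in the abstract.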
NASA Astrophysics Data System (ADS)
Niemi, Sami-Matias; Kitching, Thomas D.; Cropper, Mark
2015-12-01
One of the most powerful techniques for studying the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and of the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and to isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy properties using only photometric data, allowing a reduction of shape noise. As a proof of concept, the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply it to the case of weak lensing power spectra.
Gear systems for advanced turboprops
NASA Technical Reports Server (NTRS)
Wagner, Douglas A.
1987-01-01
A new generation of transport aircraft will be powered by efficient, advanced turboprop propulsion systems. Systems that develop 5,000 to 15,000 horsepower have been studied. Reduction gearing for these advanced propulsion systems is discussed. Allison Gas Turbine Division's experience with the 5,000 horsepower reduction gearing for the T56 engine is reviewed and the impact of that experience on advanced gear systems is considered. The reliability needs for component design and development are also considered. Allison's experience and their research serve as a basis on which to characterize future gear systems that emphasize low cost and high reliability.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
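A minimal numerical sketch of the post-stratified estimator of the mean, ȳ_ps = Σ_h W_h ȳ_h, with hypothetical stratum weights and within-stratum parameters (the names and numbers are illustrative, not FIA data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: two map-based strata (e.g. forest / non-forest)
# with known area weights W_h = N_h / N, but allocation not fixed in advance.
weights = {"forest": 0.4, "nonforest": 0.6}
means = {"forest": 120.0, "nonforest": 15.0}
sds = {"forest": 30.0, "nonforest": 10.0}

# Draw one simple random sample; stratum membership is observed after sampling,
# which is what makes this post-stratification rather than stratified sampling.
n = 400
labels = rng.choice(list(weights), size=n, p=[weights[h] for h in weights])
y = np.array([rng.normal(means[h], sds[h]) for h in labels])

# Post-stratified estimator: weight each observed stratum mean by its known W_h.
ps_mean = sum(w * y[labels == h].mean() for h, w in weights.items())
srs_mean = y.mean()
true_mean = sum(w * means[h] for h, w in weights.items())  # 0.4*120 + 0.6*15
print(round(ps_mean, 2), round(srs_mean, 2), round(true_mean, 2))
```

Because the random within-stratum sample sizes n_h appear in the denominator of each ȳ_h, very small n_h inflate the estimator's variance, which is exactly the small-strata concern the abstract raises.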
Feasibility of histogram analysis of susceptibility-weighted MRI for staging of liver fibrosis
Yang, Zhao-Xia; Liang, He-Yue; Hu, Xin-Xing; Huang, Ya-Qin; Ding, Ying; Yang, Shan; Zeng, Meng-Su; Rao, Sheng-Xiang
2016-01-01
PURPOSE We aimed to evaluate whether histogram analysis of susceptibility-weighted imaging (SWI) could quantify liver fibrosis grade in patients with chronic liver disease (CLD). METHODS Fifty-three patients with CLD who underwent multi-echo SWI (TEs of 2.5, 5, and 10 ms) were included. Histogram analysis of the SWI images was performed, and the mean, variance, skewness, kurtosis, and the 1st, 10th, 50th, 90th, and 99th percentiles were derived. Quantitative histogram parameters were compared. For significant parameters, receiver operating characteristic (ROC) analyses were further performed to evaluate the potential diagnostic performance for differentiating liver fibrosis stages. RESULTS The number of patients in each pathologic fibrosis grade was 7, 3, 5, 5, and 33 for F0, F1, F2, F3, and F4, respectively. The variance (TE: 10 ms), 90th percentile (TE: 10 ms), and 99th percentile (TE: 10 and 5 ms) in the F0–F3 group were significantly lower than in the F4 group, with areas under the ROC curves (AUCs) of 0.84 for variance and 0.70–0.73 for the 90th and 99th percentiles, respectively. The variance (TE: 10 and 5 ms), 99th percentile (TE: 10 ms), and skewness (TE: 2.5 and 5 ms) in the F0–F2 group were smaller than those of the F3/F4 group, with AUCs of 0.88 and 0.69 for variance (TE: 10 and 5 ms, respectively), 0.68 for the 99th percentile (TE: 10 ms), and 0.73 and 0.68 for skewness (TE: 2.5 and 5 ms, respectively). CONCLUSION Magnetic resonance histogram analysis of SWI, particularly the variance, is promising for predicting advanced liver fibrosis and cirrhosis. PMID:27113421
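The histogram features and ROC analysis described in METHODS can be sketched as follows. The pixel data are simulated stand-ins, not patient images, and the AUC is computed via its Mann-Whitney U equivalence:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical SWI pixel intensities for two groups (illustrative only):
# the "cirrhosis" group is given a wider intensity spread, i.e. higher variance.
f0_f3 = [rng.normal(100, 8, 5000) for _ in range(20)]
f4 = [rng.normal(100, 14, 5000) for _ in range(20)]

def histogram_features(pixels):
    """First-order histogram parameters, mirroring those in the abstract."""
    return {
        "mean": pixels.mean(),
        "variance": pixels.var(ddof=1),
        "skewness": stats.skew(pixels),
        "kurtosis": stats.kurtosis(pixels),
        "p90": np.percentile(pixels, 90),
        "p99": np.percentile(pixels, 99),
    }

var_lo = np.array([histogram_features(p)["variance"] for p in f0_f3])
var_hi = np.array([histogram_features(p)["variance"] for p in f4])

# AUC equals the Mann-Whitney U statistic scaled by the product of group sizes.
u, _ = stats.mannwhitneyu(var_hi, var_lo, alternative="greater")
auc = u / (len(var_hi) * len(var_lo))
print(round(auc, 3))
```

With well-separated groups the variance feature gives an AUC near 1; the real data's AUC of 0.84 reflects the overlap between fibrosis grades.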
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers, as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions that reflect divergent trends in the ensemble.
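A toy version of the modality classification: count prominent peaks in a smoothed histogram of the ensemble values at one location. The bin count, prominence threshold, and the two sample distributions are assumptions for illustration, not the paper's classifier:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)

def count_modes(samples, bins=24):
    """Crude modality classifier: prominent peaks of a smoothed histogram."""
    hist, _ = np.histogram(samples, bins=bins, density=True)
    # Light smoothing suppresses sampling noise before peak counting.
    smooth = np.convolve(hist, np.array([1, 2, 3, 2, 1]) / 9.0, mode="same")
    peaks, _ = find_peaks(smooth, prominence=0.2 * smooth.max())
    return len(peaks)

# Ensemble predictions at two hypothetical grid locations (illustrative):
# one single-tendency distribution, one with divergent ensemble trends.
unimodal = rng.normal(0.0, 1.0, 4000)
bimodal = np.concatenate([rng.normal(-4.0, 0.7, 2000),
                          rng.normal(4.0, 0.7, 2000)])

print(count_modes(unimodal), count_modes(bimodal))
```

Note that both example distributions can have similar variance, yet only the mode count distinguishes the single-tendency case from the divergent one, which is the abstract's core point about summary statistics.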
The utility of the cropland data layer for Forest Inventory and Analysis
Greg C. Liknes; Mark D. Nelson; Dale D. Gormanson; Mark Hansen
2009-01-01
The Forest Service, U.S. Department of Agriculture's (USDA's) Northern Research Station Forest Inventory and Analysis program (NRS-FIA) uses digital land cover products derived from remotely sensed imagery, such as the National Land Cover Dataset (NLCD), for the purpose of variance reduction via postsampling stratification. The update cycle of the NLCD...
Long-Haul Truck Sleeper Heating Load Reduction Package for Rest Period Idling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lustbader, Jason Aaron; Kekelia, Bidzina; Tomerlin, Jeff
Annual fuel use for sleeper cab truck rest period idling is estimated at 667 million gallons in the United States, or 6.8% of long-haul truck fuel use. Truck idling during a rest period represents zero freight efficiency and is largely done to supply accessory power for climate conditioning of the cab. The National Renewable Energy Laboratory's CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck thermal management systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In addition, if the fuel savings provide a one- to three-year payback period, fleet owners will be economically motivated to incorporate them. For candidate idle reduction technologies to be implemented by original equipment manufacturers and fleets, their effectiveness must be quantified. To address this need, several promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. Load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, conductive pathways, and efficient equipment. Technologies in each of these focus areas were investigated in collaboration with industry partners. The most promising of these technologies were then combined with the goal of exceeding a 30% reduction in HVAC loads. These technologies included 'ultra-white' paint, advanced insulation, and advanced curtain design. Previous testing showed more than a 35.7% reduction in air conditioning loads. This paper describes the overall heat transfer coefficient testing of this advanced load reduction technology package, which showed more than a 43% reduction in heating load. Adding a further layer of advanced insulation with a reflective barrier to the thermal load reduction package resulted in a 53.3% reduction in the overall heat transfer coefficient.
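The overall heat transfer coefficient metric behind these percentages can be sketched as UA = Q/ΔT from a steady-state heat-soak test. The heater powers and temperatures below are invented for illustration and are not NREL test data:

```python
# Overall heat transfer coefficient (UA) from a steady-state heating test:
# UA = Q / dT, where Q is the heater power needed to hold the cab at a fixed
# temperature and dT is the cab-to-ambient temperature difference.
def ua(q_watts, t_inside_c, t_ambient_c):
    return q_watts / (t_inside_c - t_ambient_c)

# Illustrative numbers only: same test conditions, lower power with the package.
baseline = ua(q_watts=1500.0, t_inside_c=22.0, t_ambient_c=-10.0)  # W/K
package = ua(q_watts=850.0, t_inside_c=22.0, t_ambient_c=-10.0)    # W/K

reduction_pct = 100.0 * (1.0 - package / baseline)
print(round(baseline, 1), round(package, 1), round(reduction_pct, 1))
```

With a fixed temperature difference, the percent reduction in UA equals the percent reduction in steady-state heater power, which is why the heating-load and heat-transfer-coefficient figures in the abstract track each other.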
Southwestern USA Drought over Multiple Millennia
NASA Astrophysics Data System (ADS)
Salzer, M. W.; Kipfmueller, K. F.
2014-12-01
Severe to extreme drought conditions currently exist across much of the American West. There is increasing concern that climate change may be worsening droughts in the West and particularly the Southwest. Thus, it is important to understand the role of natural variability and to place current conditions in a long-term context. We present a tree-ring derived reconstruction of regional-scale precipitation for the Southwestern USA over several millennia. A network of 48 tree-ring chronologies from California, Nevada, Utah, Arizona, New Mexico, and Colorado was used. All of the chronologies are at least 1,000 years long. The network was subjected to data reduction through PCA and a "nested" multiple linear regression reconstruction approach. The regression model was able to capture 72% of the variance in September-August precipitation over the last 1,000 years and 53% of the variance over the first millennium of the Common Era. Variance captured and spatial coverage further declined back in time as the shorter chronologies dropped out of the model, eventually reaching 24% of variance captured at 3250 BC. Results show regional droughts on decadal- to multi-decadal scales have been prominent and persistent phenomena in the region over the last several millennia. Anthropogenic warming is likely to exacerbate the effects of future droughts on human and other biotic populations.
The magnitude and colour of noise in genetic negative feedback systems
Voliotis, Margaritis; Bowsher, Clive G.
2012-01-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
Direct simulation of compressible turbulence in a shear flow
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.
1991-01-01
The purpose of this study is to investigate compressibility effects on the turbulence in homogeneous shear flow. It is found that the growth of the turbulent kinetic energy decreases with increasing Mach number, a phenomenon similar to the reduction of turbulent velocity intensities observed in experiments on supersonic free shear layers. An examination of the turbulent energy budget shows that both the compressible dissipation and the pressure-dilatation contribute to the decrease in the growth of kinetic energy. The pressure-dilatation is predominantly negative in homogeneous shear flow, in contrast to its predominantly positive behavior in isotropic turbulence. The different signs of the pressure-dilatation are explained by theoretical consideration of the equations for the pressure variance and density variance.
The Effect of Carbonaceous Reductant Selection on Chromite Pre-reduction
NASA Astrophysics Data System (ADS)
Kleynhans, E. L. J.; Beukes, J. P.; Van Zyl, P. G.; Bunt, J. R.; Nkosi, N. S. B.; Venter, M.
2017-04-01
Ferrochrome (FeCr) production is an energy-intensive process. Currently, the pelletized chromite pre-reduction process, also referred to as solid-state reduction of chromite, is most likely the FeCr production process with the lowest specific electricity consumption, i.e., MWh/t FeCr produced. In this study, the effects of carbonaceous reductant selection on chromite pre-reduction and cured pellet strength were investigated. Multiple linear regression analysis was employed to evaluate the effect of reductant characteristics on these two parameters, yielding mathematical solutions that FeCr producers can use to select reductants more optimally in the future. Additionally, the results indicated that hydrogen (H) content (24 pct) and volatile content (45.8 pct) were the most significant contributors for predicting variance in pre-reduction and compressive strength, respectively. The role of H in this context is postulated to be linked to the ability of a reductant to release H that can induce reduction. Therefore, contrary to the current operational selection criteria, the authors believe that thermally untreated reductants (e.g., anthracite, as opposed to coke or char), with volatile contents close to the currently applied specification (to ensure pellet strength), would be optimal, since they would maximize the H content that enhances pre-reduction.
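A multiple linear regression of pre-reduction on reductant properties, in the spirit of the analysis above, can be sketched with synthetic data. The data set, coefficients, and property ranges are invented; only the method mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical reductant characterisation data (illustrative values, wt%):
# hydrogen and volatile content drive the simulated pre-reduction response.
n = 60
hydrogen = rng.uniform(2.0, 6.0, n)
volatiles = rng.uniform(5.0, 40.0, n)
fixed_c = rng.uniform(50.0, 90.0, n)
pre_reduction = 5.0 + 8.0 * hydrogen + 0.2 * volatiles + rng.normal(0, 2.0, n)

# Ordinary least squares fit of pre-reduction on the reductant properties.
X = np.column_stack([np.ones(n), hydrogen, volatiles, fixed_c])
beta, *_ = np.linalg.lstsq(X, pre_reduction, rcond=None)
fitted = X @ beta

# Fraction of variance in pre-reduction explained by the regression (R^2).
r2 = 1.0 - np.sum((pre_reduction - fitted) ** 2) / np.sum(
    (pre_reduction - pre_reduction.mean()) ** 2)
print(round(r2, 2))
```

In the paper's setting, the analogous fitted model is what lets contributions such as "24 pct from hydrogen content" be attributed to individual reductant characteristics.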
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data as advanced analytical techniques, which deliver much information in a single measurement run, develop rapidly. This is especially true of spectra, which are frequently the subject of comparative analysis in, e.g., the forensic sciences. In the presented study, microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g., bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences, using the likelihood ratio (LR) approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of the data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and the variance information hidden in the data structure.
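A minimal score-based likelihood ratio, using assumed normal models for same-source and different-source distances, illustrates the distance-representation idea (a simplification of the paper's LR models; all distributions are hypothetical):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Hypothetical spectral distances: pairs of spectra from the same source sit
# close together; pairs from different sources sit further apart.
same_d = rng.normal(0.5, 0.2, 300)   # same-source training distances
diff_d = rng.normal(2.0, 0.6, 300)   # different-source training distances

# Fit a normal density to each population of scores.
f_same = norm(same_d.mean(), same_d.std(ddof=1))
f_diff = norm(diff_d.mean(), diff_d.std(ddof=1))

def likelihood_ratio(distance):
    """LR = f(distance | same source) / f(distance | different source)."""
    return f_same.pdf(distance) / f_diff.pdf(distance)

# A small distance supports the same-source proposition (LR > 1);
# a large distance supports the different-source proposition (LR < 1).
print(likelihood_ratio(0.5) > 1.0, likelihood_ratio(2.0) < 1.0)
```

The paper's models act on LDA-compressed spectral features rather than a raw distance, but the evidential logic, reporting a ratio of likelihoods under the two competing propositions, is the same.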
A multispecies tree ring reconstruction of Potomac River streamflow (950-2001)
NASA Astrophysics Data System (ADS)
Maxwell, R. Stockton; Hessl, Amy E.; Cook, Edward R.; Pederson, Neil
2011-05-01
Mean May-September Potomac River streamflow was reconstructed from 950-2001 using a network of tree ring chronologies (n = 27) representing multiple species. We chose a nested principal components reconstruction method to maximize use of available chronologies backward in time. Explained variance during the period of calibration ranged from 20% to 53% depending on the number and species of chronologies available in each 25 year time step. The model was verified by two goodness of fit tests, the coefficient of efficiency (CE) and the reduction of error statistic (RE). The RE and CE never fell below zero, suggesting the model had explanatory power over the entire period of reconstruction. Beta weights indicated a loss of explained variance during the 1550-1700 period that we hypothesize was caused by the reduction in total number of predictor chronologies and loss of important predictor species. Thus, the reconstruction is strongest from 1700-2001. Frequency, intensity, and duration of drought and pluvial events were examined to aid water resource managers. We found that the instrumental period did not represent adequately the full range of annual to multidecadal variability present in the reconstruction. Our reconstruction of mean May-September Potomac River streamflow was a significant improvement over the Cook and Jacoby (1983) reconstruction because it expanded the seasonal window, lengthened the record by 780 years, and better replicated the mean and variance of the instrumental record. By capitalizing on variable phenologies and tree growth responses to climate, multispecies reconstructions may provide significantly more information about past hydroclimate, especially in regions with low aridity and high tree species diversity.
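The two verification statistics named above can be written out directly: RE benchmarks the reconstruction against the calibration-period mean, CE against the verification-period mean. The observation and prediction series below are synthetic, not the streamflow data:

```python
import numpy as np

def reduction_of_error(obs, pred, calib_mean):
    """RE: skill relative to always predicting the calibration-period mean."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - calib_mean) ** 2)

def coefficient_of_efficiency(obs, pred):
    """CE: skill relative to the verification-period mean (a stricter test)."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Illustrative verification-period data (not the Potomac reconstruction).
rng = np.random.default_rng(7)
obs = rng.normal(100.0, 20.0, 40)
pred = obs + rng.normal(0.0, 10.0, 40)  # a skillful but imperfect model
calib_mean = 95.0                       # assumed calibration-period mean

re = reduction_of_error(obs, pred, calib_mean)
ce = coefficient_of_efficiency(obs, pred)
# RE can never fall below CE, because the verification mean minimises the
# denominator sum of squares; both above zero indicates skill over climatology.
print(round(re, 2), round(ce, 2))
```

This ordering explains why a reconstruction whose CE stays above zero (as reported above) automatically has positive RE as well.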
An Updated Assessment of NASA Ultra-Efficient Engine Technologies
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Jones, Scott M.
2005-01-01
NASA's Ultra Efficient Engine Technology (UEET) project features advanced aeropropulsion technologies that include highly loaded turbomachinery, an advanced low-NOx combustor, high-temperature materials, and advanced fan containment technology. A probabilistic system assessment is performed to evaluate the impact of these technologies on aircraft CO2 (or equivalent fuel burn) and NOx reductions. A 300-passenger aircraft, with two 396-kN thrust (85,000-lb) engines is chosen for the study. The results show that a large subsonic aircraft equipped with the current UEET technology portfolio has very high probabilities of meeting the UEET minimum success criteria for CO2 reduction (-12% from the baseline) and LTO (landing and takeoff) NOx reductions (-65% relative to the 1996 International Civil Aviation Organization rule).
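The flavor of such a probabilistic assessment can be sketched with Monte Carlo propagation of assumed technology uncertainties. The distributions and the additive fuel-burn model below are illustrative inventions, not the NASA assessment:

```python
import numpy as np

rng = np.random.default_rng(9)

# Assumed (triangular) uncertainty distributions for three technology
# contributions to fuel-burn reduction -- hypothetical, for illustration.
n = 100_000
sfc_gain = rng.triangular(0.06, 0.08, 0.10, n)     # engine SFC improvement
weight_gain = rng.triangular(0.02, 0.04, 0.06, n)  # structures/materials effect
drag_gain = rng.triangular(0.01, 0.02, 0.03, n)    # aerodynamics-related effect

# Crude additive roll-up of the fractional fuel-burn (CO2) reduction.
fuel_burn_reduction = sfc_gain + weight_gain + drag_gain

# Probability of meeting a -12% fuel-burn success criterion.
p_meet = np.mean(fuel_burn_reduction >= 0.12)
print(round(p_meet, 3))
```

The assessment's "very high probability" statements correspond to exactly this kind of tail probability of the propagated technology-portfolio distribution clearing the minimum success criterion.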
Application of advanced technologies to small, short-haul aircraft
NASA Technical Reports Server (NTRS)
Andrews, D. G.; Brubaker, P. W.; Bryant, S. L.; Clay, C. W.; Giridharadas, B.; Hamamoto, M.; Kelly, T. J.; Proctor, D. K.; Myron, C. E.; Sullivan, R. L.
1978-01-01
The results of a preliminary design study that investigates the use of selected advanced technologies to achieve a low cost design for small (50-passenger), short-haul (50 to 1000 mile) transports are reported. The largest single item in the cost of manufacturing an airplane of this type is labor, and one of the most labor-intensive parts of the airplane is its structure, so the application of advanced technology to the airframe structure was examined carefully. Preliminary investigations of advanced aerodynamics, flight controls, ride control and gust load alleviation systems, aircraft systems, and turboprop propulsion systems were also performed. The most beneficial advanced technology examined was bonded aluminum primary structure. The use of this structure in large wing panels and body sections resulted in a greatly reduced number of parts and fasteners and, therefore, labor hours. The resultant cost of the assembled airplane structure was reduced by 40% and the total airplane manufacturing cost by 16%, a major cost reduction. With further development, test verification, and optimization, appreciable weight saving is also achievable. Other advanced technology items that showed significant gains are as follows: (1) advanced turboprop: reduced block fuel by 15 to 30 percent, depending on range; (2) configuration revisions (vee-tail): empennage cost reduction of 25%; (3) leading-edge flap addition: weight reduction of 2500 pounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
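A scaled-down sketch of this workflow: quasi-Monte Carlo (Sobol') sampling of a parameter space, an ensemble of toy model runs, and a variance-based sensitivity measure from a linear-model fit. The three parameters and the response function are invented stand-ins for the paper's seven:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(8)

# Quasi-Monte Carlo sample of a 3-parameter unit cube: 2^8 = 256 ensemble
# members, standing in for the paper's 250-member, 7-parameter ensemble.
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
params = sampler.random_base2(m=8)
switch, bio_voc, sivoc = params.T

# Toy response: a dominant on/off volatility-transformation switch plus two
# smaller emission terms and noise (illustrative, not the SOA model).
soa = 10.0 * (switch > 0.5) + 3.0 * bio_voc + 1.5 * sivoc \
    + rng.normal(0, 0.2, 256)

def explained_fraction(x, y):
    """Fraction of output variance explained by a single-parameter linear fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

for name, x in [("switch", switch), ("bio_voc", bio_voc), ("sivoc", sivoc)]:
    print(name, round(explained_fraction(x, soa), 2))
```

As in the study, the on/off transformation parameter dominates the output variance, and the per-parameter explained fractions play the role of the variance contributions quoted in the abstract.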
Small Engine Component Technology (SECT) studies
NASA Technical Reports Server (NTRS)
Meyer, P. K.; Harbour, L.
1986-01-01
A study was conducted to identify component technology requirements for small, expendable gas turbine engines that would result in substantial improvements in performance and cost by the year 2000. A subsonic, 2600 nautical mile (4815 km) strategic cruise missile mission was selected for study. A baseline (state-of-the-art) engine and missile configuration were defined to evaluate the advanced technology engines. Two advanced technology engines were configured and evaluated using advanced component efficiencies and ceramic composite materials: a 22:1 overall pressure ratio, 3.85 bypass ratio twin-spool turbofan, and an 8:1 overall pressure ratio, 3.66 bypass ratio, single-spool recuperated turbofan with 0.85 recuperator effectiveness. Results of the mission analysis indicated a reduction in fuel burn of 38 and 47 percent compared to the baseline engine when using the advanced turbofan and recuperated turbofan, respectively. While use of either advanced engine resulted in approximately a 25 percent reduction in missile size, the 56 percent reduction in unit life cycle cost (LCC) for the advanced turbofan relative to the baseline engine gave it a decisive advantage over the recuperated turbofan, with its 47 percent LCC reduction. An additional range improvement of 10 percent results when using a 56 percent loaded carbon slurry fuel with either engine. These results can be realized only if significant progress is attained in the fields of solid lubricated bearings, small aerodynamic component performance, composite ceramic materials, and integration of slurry fuels. A technology plan outlining prospective programs in these fields is presented.
Lim, Sanghyeok; Kim, Seung Hyun; Kim, Yongsoo; Cho, Young Seo; Kim, Tae Yeob; Jeong, Woo Kyoung; Sohn, Joo Hyun
2018-02-01
To compare the diagnostic performance for advanced hepatic fibrosis measured by 2D shear-wave elastography (SWE), using either the coefficient of variation (CV) or the interquartile range divided by the median value (IQR/M) as the quality criterion. In this retrospective study, from January 2011 to December 2013, 96 patients who underwent both liver stiffness measurement by 2D SWE and liver biopsy for hepatic fibrosis grading were enrolled. The diagnostic performances of the CV and the IQR/M were analyzed using receiver operating characteristic (ROC) curves with areas under the curves (AUCs) and were compared by Fisher's Z test, based on matching the cutoff points in an interactive dot diagram. All P values less than 0.05 were considered significant. When the IQR/M cutoff value of 0.21 was used, the matched cutoff point of CV was 20%. With a CV cutoff of 20%, the diagnostic performance for advanced hepatic fibrosis (≥ F3 grade) in the group with CV less than 20% was better than that in the group with CV greater than or equal to 20% (AUC 0.967 versus 0.786, z statistic = 2.23, P = .025), whereas the matched IQR/M cutoff of 0.21 showed no such difference (AUC 0.918 versus 0.927, z statistic = -0.178, P = .859). The validity of liver stiffness measurements made by 2D SWE for assessing advanced hepatic fibrosis may be judged using CVs, and when the CV is less than 20% the measurement can be considered "more reliable" than when using an IQR/M of less than 0.21.
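The two quality criteria compared above are simple to compute from a set of repeated measurements; the stiffness values below are invented for illustration, not patient data:

```python
import numpy as np

# Hypothetical repeated liver stiffness measurements from one exam (kPa).
measurements = np.array([9.8, 10.4, 11.0, 10.1, 12.3, 9.5, 10.8, 11.6, 10.2, 10.9])

# CV = sample standard deviation / mean.
cv = measurements.std(ddof=1) / measurements.mean()

# IQR/M = interquartile range / median.
q1, med, q3 = np.percentile(measurements, [25, 50, 75])
iqr_over_median = (q3 - q1) / med

# The study's matched cutoffs: CV < 20% paired with IQR/M < 0.21.
reliable = cv < 0.20
print(round(cv, 3), round(iqr_over_median, 3), reliable)
```

Because the CV uses the standard deviation while IQR/M uses quartiles, the two criteria respond differently to outlying acquisitions, which is one plausible reason their reliability classifications can disagree.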
Multiple Regression as a Flexible Alternative to ANOVA in L2 Research
ERIC Educational Resources Information Center
Plonsky, Luke; Oswald, Frederick L.
2017-01-01
Second language (L2) research relies heavily and increasingly on ANOVA (analysis of variance)-based results as a means to advance theory and practice. This fact alone should merit some reflection on the utility and value of ANOVA. It is possible that we could use this procedure more appropriately and, as argued here, other analyses such as…
2012-09-01
by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and QUASAR DSI helmet-based systems (ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).
Attentional models of multitask pilot performance using advanced display technology.
Wickens, Christopher D; Goh, Juliana; Helleberg, John; Horrey, William J; Talleur, Donald A
2003-01-01
In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.
Beyond promiscuity: mate-choice commitments in social breeding
Boomsma, Jacobus J.
2013-01-01
Obligate eusociality with distinct caste phenotypes has evolved from strictly monogamous sub-social ancestors in ants, some bees, some wasps and some termites. This implies that no lineage reached the most advanced form of social breeding, unless helpers at the nest gained indirect fitness values via siblings that were identical to direct fitness via offspring. The complete lack of re-mating promiscuity equalizes sex-specific variances in reproductive success. Later, evolutionary developments towards multiple queen-mating retained lifetime commitment between sexual partners, but reduced male variance in reproductive success relative to female's, similar to the most advanced vertebrate cooperative breeders. Here, I (i) discuss some of the unique and highly peculiar mating system adaptations of eusocial insects; (ii) address ambiguities that remained after earlier reviews and extend the monogamy logic to the evolution of soldier castes; (iii) evaluate the evidence for indirect fitness benefits driving the dynamics of (in)vertebrate cooperative breeding, while emphasizing the fundamental differences between obligate eusociality and cooperative breeding; (iv) infer that lifetime commitment is a major driver towards higher levels of organization in bodies, colonies and mutualisms. I argue that evolutionary informative definitions of social systems that separate direct and indirect fitness benefits facilitate transparency when testing inclusive fitness theory. PMID:23339241
NASA Astrophysics Data System (ADS)
Vech, Daniel; Chen, Christopher
2016-04-01
One of the most important features of plasma turbulence is its anisotropy, which arises due to the presence of the magnetic field. Understanding this anisotropy is particularly important for revealing how the turbulent cascade operates. It is well known that anisotropy exists with respect to the mean magnetic field; however, recent theoretical studies have suggested anisotropy with respect to the radial direction as well. The purpose of this study is to investigate the variance and spectral anisotropies of solar wind turbulence with multi-point spacecraft observations. The study includes Advanced Composition Explorer (ACE), WIND and Cluster spacecraft data. The second-order structure functions are derived for two different spacecraft configurations: when the pair of spacecraft is separated radially (with respect to the spacecraft-Sun line) and when it is separated along the transverse direction. We analyze the effect of the different sampling directions on the variance anisotropy, global spectral anisotropy and local 3D spectral anisotropy, and discuss the implications for our understanding of solar wind turbulence.
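The second-order structure function used here is straightforward to estimate from a single time series; a sketch with a synthetic uncorrelated signal as a sanity check (the spacecraft-pair geometry itself is not modeled):

```python
import numpy as np

def second_order_structure_function(b, lags):
    """S2(l) = <|b(t + l) - b(t)|^2>, the quantity compared between
    radially and transversely separated sampling directions."""
    b = np.asarray(b, dtype=float)
    return np.array([np.mean((b[l:] - b[:-l]) ** 2) for l in lags])

rng = np.random.default_rng(0)
b = rng.standard_normal(200_000)  # uncorrelated unit-variance signal
s2 = second_order_structure_function(b, lags=[1, 10, 100])
# For an uncorrelated series, S2 at any lag approaches 2 * Var(b) = 2
```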
Classification of collected trot, passage and piaffe based on temporal variables.
Clayton, H M
1997-05-01
The objective was to determine whether collected trot, passage and piaffe could be distinguished as separate gaits on the basis of temporal variables. Sagittal plane, 60 Hz videotapes of 10 finalists in the dressage competitions at the 1992 Olympic Games were analysed to measure the temporal variables in absolute terms and as percentages of stride duration. Classification was based on analysis of variance, a graphical method and discriminant analysis. Stride duration was sufficient to distinguish collected trot from passage and piaffe in all horses. The analysis of variance showed that the mean values of most variables differed significantly between passage and piaffe. When hindlimb stance percentage was plotted against diagonal advanced placement percentage, some overlap was found between all 3 movements indicating that individual horses could not be classified reliably in this manner. Using hindlimb stance percentage and diagonal advanced placement percentage as input in a discriminant analysis, 80% of the cases were classified correctly, but at least one horse was misclassified in each movement. When the absolute, rather than percentage, values of the 2 variables were used as input in the discriminant analysis, 90% of the cases were correctly classified and the only misclassifications were between passage and piaffe. However, the 2 horses in which piaffe was misclassified as passage were the gold and silver medallists. In general, higher placed horses tended toward longer diagonal advanced placements, especially in collected trot and passage, and shorter hindlimb stance percentages in passage and piaffe.
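The final discriminant analysis uses only two predictors, hindlimb stance duration and diagonal advanced placement. A sketch of that setup with invented class centers (the actual Olympic measurements are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical (hindlimb stance s, diagonal advanced placement s) centers
# per movement; values are illustrative only.
centers = {"trot": (0.28, 0.02), "passage": (0.40, 0.06), "piaffe": (0.50, 0.01)}
X, y = [], []
for label, (stance, dap) in centers.items():
    pts = rng.normal([stance, dap], [0.01, 0.004], size=(10, 2))
    X.append(pts)
    y += [label] * 10
X = np.vstack(X)

lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)  # resubstitution accuracy, analogous to the 90%
```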
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-cluster variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
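The tandem approach under test can be sketched in a few lines: reduce dimension first, then cluster. The noise dimensions below are deliberately given high variance so that, per the conjecture, they dominate the leading principal components (synthetic continuous data for illustration, not the paper's categorical simulation design):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two well-separated clusters in 2 informative dimensions,
# plus 8 high-variance dimensions unrelated to the cluster structure
informative = np.vstack([rng.normal(0, 0.3, (50, 2)),
                         rng.normal(4, 0.3, (50, 2))])
noise = rng.normal(0, 3.0, (100, 8))  # cluster-irrelevant variance
X = np.hstack([informative, noise])

# Tandem approach: dimension reduction first, clustering second.
# Maximal-variance components need not align with the cluster structure,
# which is exactly the failure mode the simulation study examines.
scores = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```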
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
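First-order Sobol' indices of the kind used to rank parameters can be estimated with a pick-freeze scheme; a self-contained sketch on a toy additive model (the HSAMI model itself is not reproduced):

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=100_000, seed=0):
    """Pick-freeze estimate of first-order Sobol' indices:
    S_i = Cov(Y, Y_i) / Var(Y), where Y_i reuses sample x_i
    but resamples all other parameters."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA = model(A)
    var, mean = yA.var(), yA.mean()
    S = np.empty(n_params)
    for i in range(n_params):
        AB = B.copy()
        AB[:, i] = A[:, i]          # freeze parameter i, resample the rest
        S[i] = (np.mean(yA * model(AB)) - mean ** 2) / var
    return S

# Toy additive stand-in for a hydrological model: for y = 4*x1 + x2 with
# independent uniform inputs, the analytic indices are S = (16/17, 1/17).
S = first_order_sobol(lambda x: 4 * x[:, 0] + x[:, 1], n_params=2)
```

Parameters whose index falls near zero are candidates for fixing, which is the sequential-fixing strategy the paper applies.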
The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, T.E.; Mihalczo, J.T.
1995-12-31
This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.; Henson, Robert
2012-01-01
A measure of "clusterability" serves as the basis of a new methodology designed to preserve cluster structure in a reduced dimensional space. Similar to principal component analysis, which finds the direction of maximal variance in multivariate space, principal cluster axes find the direction of maximum clusterability in multivariate space.…
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
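As a point of reference for the linearization discussed above, the IMC equations are usually summarized by the Fleck factor, shown here in its common grey (frequency-independent) form; this is the textbook expression, not a quotation from this review:

```latex
f = \frac{1}{1 + \alpha \beta c \,\sigma_a \,\Delta t},
\qquad
\beta = \frac{4 a T^{3}}{\rho c_v}
```

where $\alpha \in [0,1]$ is the time-centering parameter, $\sigma_a$ the absorption opacity, and $\Delta t$ the time step; within a step, a fraction $1 - f$ of absorptions is treated as effective scattering, which is the source of both the method's robustness and the large-time-step artifacts the review analyzes.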
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
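The baseline expectation being contrasted here, that adjusting for a prognostic covariate shrinks the variance of the treatment estimate in a linear model, can be checked by simulation (a generic Monte Carlo sketch, not the paper's clinical data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n = 2000, 200
est_unadj, est_adj = [], []
for _ in range(n_trials):
    treat = rng.integers(0, 2, n).astype(float)  # randomized assignment
    x = rng.standard_normal(n)                   # prognostic covariate
    y = 1.0 * treat + 2.0 * x + rng.standard_normal(n)  # true effect = 1
    # Unadjusted estimate: difference in group means
    est_unadj.append(y[treat == 1].mean() - y[treat == 0].mean())
    # Adjusted estimate: OLS of y on intercept, treatment, and covariate
    X = np.column_stack([np.ones(n), treat, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    est_adj.append(beta[1])
var_unadj, var_adj = np.var(est_unadj), np.var(est_adj)
# Adjustment removes the covariate's contribution to outcome variance,
# so the adjusted estimator is markedly less variable across trials.
```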
Casemix classification payment for sub-acute and non-acute inpatient care, Thailand.
Khiaocharoen, Orathai; Pannarunothai, Supasit; Zungsontiporn, Chairoj; Riewpaiboon, Wachara
2010-07-01
There is a need to develop other casemix classifications, apart from DRG, for the sub-acute and non-acute inpatient care payment mechanism in Thailand. To develop a casemix classification for sub-acute and non-acute inpatient service. The study began with developing a classification system, analyzing cost, assigning payment weights, and ended with testing the validity of this new casemix system. Coefficient of variation, reduction in variance, linear regression, and split-half cross-validation were employed. The casemix for sub-acute and non-acute inpatient services contained 98 groups. Two percent of them had a coefficient of variation of cost higher than 1.5. The reduction in variance of cost after the classification was 32%. Two classification variables (physical function and the rehabilitation impairment categories) were key determinants of the cost (adjusted R2 = 0.749, p = .001). Validity results of split-half cross-validation of sub-acute and non-acute inpatient service were high. The present study indicated that the casemix for sub-acute and non-acute inpatient services closely predicted hospital resource use and should be further developed for payment of inpatient care in the sub-acute and non-acute phases.
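The reduction-in-variance statistic used to validate the grouping is one line of algebra: the share of total cost variance explained by group membership. A sketch with invented per-stay costs:

```python
import numpy as np

def reduction_in_variance(costs, groups):
    """RIV = 1 - SS_within / SS_total: the proportion of cost variance
    explained by the casemix grouping (reported as 32% in the study)."""
    costs = np.asarray(costs, dtype=float)
    groups = np.asarray(groups)
    ss_total = np.sum((costs - costs.mean()) ** 2)
    ss_within = sum(np.sum((costs[groups == g] - costs[groups == g].mean()) ** 2)
                    for g in np.unique(groups))
    return 1.0 - ss_within / ss_total

# Hypothetical per-stay costs in three casemix groups
costs = [10, 12, 11, 30, 33, 31, 55, 58, 60]
groups = [0, 0, 0, 1, 1, 1, 2, 2, 2]
riv = reduction_in_variance(costs, groups)  # near 1: groups explain cost well
```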
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
Variance-reduction normalization technique for a compton camera system
NASA Astrophysics Data System (ADS)
Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.
2011-01-01
For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in the Compton camera, which consists of scattering and absorbing detectors. The detection efficiency depends on the non-uniform efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, which is referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT), a normalization method, was studied for a Compton camera. For the VRT, Compton list-mode data of a planar uniform source of 140 keV were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
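The general idea of uniform-source normalization, independent of the paper's specific factored model, can be sketched as follows: coefficients estimated from a uniform acquisition flatten the detection response of subsequent data (simulated efficiency pattern and count levels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
# Counts per (scatterer, absorber) position pair from a uniform-source scan;
# non-uniform sensitivity is mimicked with a smooth efficiency pattern.
true_eff = np.outer(np.linspace(0.7, 1.3, 16), np.linspace(0.8, 1.2, 16))
counts = rng.poisson(5000 * true_eff)

# Normalization coefficient per pair: mean counts / observed counts, so
# multiplying projection data by it flattens the detection response.
norm = counts.mean() / counts

# A second, independent acquisition corrected by the same coefficients
counts2 = rng.poisson(5000 * true_eff)
corrected = counts2 * norm
cv_raw = counts2.std() / counts2.mean()           # dominated by the pattern
cv_corrected = corrected.std() / corrected.mean()  # near Poisson level
```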
Gary W. Miller; Patrick H. Brose; Kurt W. Gottschalk
2017-01-01
Advanced northern red oak (Quercus rubra) seedlings in an 85-year-old forest located in north-central Pennsylvania were observed for 10 years after manipulation of available sunlight by shelterwood treatments, reduction of interfering plants by broadcast herbicides and/or a single prescribed fire, and reduction of deer damage by fencing. Twenty-...
Holocene constraints on simulated tropical Pacific climate
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Cobb, K. M.; Carre, M.; Braconnot, P.; Leloup, J.; Zhou, Y.; Harrison, S. P.; Correge, T.; Mcgregor, H. V.; Collins, M.; Driscoll, R.; Elliot, M.; Schneider, B.; Tudhope, A. W.
2015-12-01
The El Niño-Southern Oscillation (ENSO) influences climate and weather worldwide, so uncertainties in its response to external forcings contribute to the spread in global climate projections. Theoretical and modeling studies have argued that such forcings may affect ENSO either via the seasonal cycle, the mean state, or extratropical influences, but these mechanisms are poorly constrained by the short instrumental record. Here we synthesize a pan-Pacific network of high-resolution marine biocarbonates spanning discrete snapshots of the Holocene (past 10,000 years of Earth's history), which we use to constrain a set of global climate model (GCM) simulations via a forward model and a consistent treatment of uncertainty. Observations suggest important reductions in ENSO variability throughout the interval, most consistently during 3-5 kyBP, when reductions of approximately two-thirds are inferred. The magnitude and timing of these ENSO variance reductions bear little resemblance to those simulated by GCMs, or to equatorial insolation. The central Pacific witnessed a mid-Holocene increase in seasonality, at odds with the reductions simulated by GCMs. Finally, while GCM aggregate behavior shows a clear inverse relationship between seasonal amplitude and ENSO-band variance in sea-surface temperature, in agreement with many previous studies, such a relationship is not borne out by these observations. Our synthesis suggests that tropical Pacific climate is highly variable, but exhibited millennia-long periods of reduced ENSO variability whose origins, whether forced or unforced, contradict existing explanations. It also points to deficiencies in the ability of current GCMs to simulate forced changes in the tropical Pacific seasonal cycle and its interaction with ENSO, highlighting a key area of growth for future modeling efforts.
NASA Technical Reports Server (NTRS)
Gray, D. E.; Dugan, J. F.
1975-01-01
This paper reports on the exploratory investigation and initial findings of the study of future turbofan concepts to conserve fuel. To date, these studies have indicated a potential reduction in cruise thrust specific fuel consumption in 1990 turbofans of approximately 15% relative to present day new engines through advances in internal aerodynamics, structure-mechanics, and materials. Advanced materials also offer the potential for fuel savings through engine weight reduction. Further studies are required to balance fuel consumption reduction with sound airlines operational economics.
NASA Technical Reports Server (NTRS)
Braslow, A. L.; Whitehead, A. H., Jr.
1973-01-01
The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R and T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.
Oxidation-Reduction Resistance of Advanced Copper Alloys
NASA Technical Reports Server (NTRS)
Greenbauer-Seng, L. (Technical Monitor); Thomas-Ogbuji, L.; Humphrey, D. L.; Setlock, J. A.
2003-01-01
Resistance to oxidation and blanching is a key issue for advanced copper alloys under development for NASA's next generation of reusable launch vehicles. Candidate alloys, including dispersion-strengthened Cu-Cr-Nb, solution-strengthened Cu-Ag-Zr, and ODS Cu-Al2O3, are being evaluated for oxidation resistance by static TGA exposures in low-p(O2) and cyclic oxidation in air, and by cyclic oxidation-reduction exposures (using air for oxidation and CO/CO2 or H2/Ar for reduction) to simulate expected service environments. The test protocol and results are presented.
Dzib-Guerra, Wendy del C.; Escalante-Erosa, Fabiola; García-Sosa, Karlina; Derbré, Séverine; Blanchard, Patricia; Richomme, Pascal; Peña-Rodríguez, Luis M.
2016-01-01
Background: Formation and accumulation of advanced glycation end-products (AGE) is recognized as a major pathogenic process in diabetic complications, atherosclerosis and cardiovascular diseases. In addition, reactive oxygen species and free radicals have been reported to participate in AGE formation and in cell damage. Natural products with antioxidant and anti-AGE activity have great therapeutic potential in the treatment of diabetes, hypertension and related complications. Objective: to test ethanolic extracts and aqueous-traditional preparations of plants used to treat diabetes, hypertension and obesity in Yucatecan traditional medicine for their anti-AGE and free radical scavenging activities. Materials and Methods: ethanolic extracts of leaves, stems and roots of nine medicinal plants, together with their traditional preparations, were prepared and tested for their anti-AGE and antioxidant activities using the inhibition of advanced glycation end products and DPPH radical scavenging assays, respectively. Results: the root extract of C. fistula (IC50 = 0.1 mg/mL) and the leaf extract of P. auritum (IC50 = 0.35 mg/mL) presented significant activity against vesperlysine and pentosidine-like AGE. Although none of the aqueous traditional preparations showed significant activity in the anti-AGE assay, both the traditional preparations and the ethanolic extracts of E. tinifolia, M. zapota, O. campechianum and P. auritum showed significant activity in the DPPH reduction assay. Conclusions: the results suggest that the metabolites responsible for the detected radical-scavenging activity are different from those involved in inhibiting AGE formation; however, the extracts with antioxidant activity may contain other metabolites which are able to prevent AGE formation through a different mechanism.
SUMMARY Ethanolic extracts from nine plants used to treat diabetes, hypertension and obesity in Yucatecan traditional medicine were tested for their anti-AGE and free radical scavenging activities. Significant activity against vesperlysine and pentosidine-like AGE was detected in the root extract of Cassia fistula and the leaf extract of Piper auritum. Traditional preparations and the ethanolic extracts of Ehretia tinifolia, Manilkara zapota, Ocimum campechianum and Piper auritum showed significant activity in the DPPH reduction assay. Results suggest that the metabolites responsible for the detected radical-scavenging activity are different from those involved in inhibiting AGE formation. Abbreviations Used: AGE: Advanced glycation end-product; DPPH: 2,2-Diphenyl-1-picrylhydrazyl; DM: Diabetes mellitus; ROS: Reactive oxygen species; BSA: Bovine serum albumin; EtOH: Ethanol; EtOAc: Ethyl acetate; ANOVA: Analysis of variance; BA: Brosimum alicastrum; BS: Bunchosia swartziana; CF: Cassia fistula; CN: Cocos nucifera; ET: Ehretia tinifolia; MZ: Manilkara zapota; OC: Ocimum campechianum; PA: Piper auritum; RM: Rhizophora mangle; L: Leaves; S: Stems; R: Roots; T: traditional preparation; I: Inflorescences; W: Water PMID:27695268
Dzib-Guerra, Wendy Del C; Escalante-Erosa, Fabiola; García-Sosa, Karlina; Derbré, Séverine; Blanchard, Patricia; Richomme, Pascal; Peña-Rodríguez, Luis M
2016-01-01
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
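Turbulence intensity itself is simply the ratio of the wind speed standard deviation to the mean; the sketch below also mimics the variance-contamination bias that inflates lidar TI (synthetic series, not the Southern Plains data):

```python
import numpy as np

def turbulence_intensity(u):
    """TI (%) = standard deviation of wind speed / mean wind speed * 100,
    conventionally over a 10-minute averaging window."""
    u = np.asarray(u, dtype=float)
    return 100.0 * u.std(ddof=1) / u.mean()

rng = np.random.default_rng(5)
u_tower = 8.0 + 0.8 * rng.standard_normal(600)   # reference anemometer series
extra = 0.4 * rng.standard_normal(600)           # lidar-only error term
u_lidar = u_tower + extra                        # contaminated variance

ti_tower = turbulence_intensity(u_tower)
ti_lidar = turbulence_intensity(u_lidar)         # biased high by the extra term
```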
Improving Lidar Turbulence Estimates for Wind Energy: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew
2016-10-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu
2016-12-15
Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance when monitoring the concentration of targets over time, and an insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between a peptidic β-sheet structure and Congo Red. The formation of the peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, a crucial indicator for evaluating thrombus diseases in the clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7 ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in the clinic). Considering the wide range of ALP-based antibodies for immunoassays, such a novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluation of tomotherapy MVCT image enhancement program for tumor volume delineation
Martin, Spencer; Rodrigues, George; Chen, Quan; Pavamani, Simon; Read, Nancy; Ahmad, Belal; Hammond, J. Alex; Venkatesan, Varagur; Renaud, James
2011-01-01
The aims of this study were to investigate the variability between physicians in delineation of head and neck tumors on original tomotherapy megavoltage CT (MVCT) studies and corresponding software enhanced MVCT images, and to establish an optimal approach for evaluation of image improvement. Five physicians contoured the gross tumor volume (GTV) for three head and neck cancer patients on 34 original and enhanced MVCT studies. Variation between original and enhanced MVCT studies was quantified by DICE coefficient and the coefficient of variance. Based on volume of agreement between physicians, higher correlation in terms of average DICE coefficients was observed in GTV delineation for enhanced MVCT for patients 1, 2, and 3 by 15%, 3%, and 7%, respectively, while delineation variance among physicians was reduced using enhanced MVCT for 12 of 17 weekly image studies. Enhanced MVCT provides advantages in reduction of variance among physicians in delineation of the GTV. Agreement on contouring by the same physician on both original and enhanced MVCT was equally high. PACS numbers: 87.57.N‐, 87.57.np, 87.57.nt
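The DICE (Dice similarity) coefficient used above to quantify inter-physician agreement is 2|A ∩ B| / (|A| + |B|) for two delineated volumes A and B. A minimal sketch over voxel-index sets (illustrative data only):

```python
def dice_coefficient(volume_a, volume_b):
    """DICE = 2|A ∩ B| / (|A| + |B|) for two sets of voxel indices."""
    a, b = set(volume_a), set(volume_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two physicians' GTV contours as voxel-index sets (toy example)
physician_1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
physician_2 = {(1, 0), (1, 1), (2, 0), (2, 1)}
agreement = dice_coefficient(physician_1, physician_2)
```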
Improving Lidar Turbulence Estimates for Wind Energy
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...
2016-10-03
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
High Pressure Low NOx Emissions Research: Recent Progress at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Chi-Ming, Lee; Tacina, Kathleen M.; Wey, Changlie
2007-01-01
In collaboration with U.S. aircraft engine companies, NASA Glenn Research Center has contributed to the advancement of low-emissions combustion systems. For the High Speed Research Program (HSR), a 90% reduction in nitrogen oxides (NOx) emissions (relative to the then-current state of the art) has been demonstrated in sector rig testing at General Electric Aircraft Engines (GEAE). For the Advanced Subsonic Technology Program (AST), a 50% reduction in NOx emissions relative to the 1996 International Civil Aviation Organization (ICAO) standards has been demonstrated in sector rigs at both GEAE and Pratt & Whitney (P&W). During the Ultra Efficient Engine Technology Program (UEET), a 70% reduction in NOx emissions, relative to the 1996 ICAO standards, was achieved in sector rig testing at Glenn in the world-class Advanced Subsonic Combustion Rig (ASCR) and at contractor facilities. Low-NOx combustor development continues under the Fundamental Aeronautics Program. To achieve these reductions, experimental and analytical research has been conducted to advance the understanding of emissions formation in combustion processes. Lean direct injection (LDI) concept development uses advanced laser-based non-intrusive diagnostics and analytical work to complement the emissions measurements and to provide guidance for concept improvement. This paper describes emissions results from flametube tests of a 9-injection-point LDI fuel/air mixer tested at inlet pressures up to 5500 kPa. Sample results from CFD and laser diagnostics are also discussed.
NASA Glenn High Pressure Low NOx Emissions Research
NASA Technical Reports Server (NTRS)
Tacina, Kathleen M.; Wey, Changlie
2008-01-01
In collaboration with U.S. aircraft engine companies, NASA Glenn Research Center has contributed to the advancement of low-emissions combustion systems. For the High Speed Research Program (HSR), a 90% reduction in nitrogen oxides (NOx) emissions (relative to the then-current state of the art) has been demonstrated in sector rig testing at General Electric Aircraft Engines (GEAE). For the Advanced Subsonic Technology Program (AST), a 50% reduction in NOx emissions relative to the 1996 International Civil Aviation Organization (ICAO) standards has been demonstrated in sector rigs at both GEAE and Pratt & Whitney (P&W). During the Ultra Efficient Engine Technology Program (UEET), a 70% reduction in NOx emissions, relative to the 1996 ICAO standards, was achieved in sector rig testing at Glenn in the world-class Advanced Subsonic Combustion Rig (ASCR) and at contractor facilities. Low-NOx combustor development continues under the Fundamental Aeronautics Program. To achieve these reductions, experimental and analytical research has been conducted to advance the understanding of emissions formation in combustion processes. Lean direct injection (LDI) concept development uses advanced laser-based non-intrusive diagnostics and analytical work to complement the emissions measurements and to provide guidance for concept improvement. This paper describes emissions results from flametube tests of a 9-injection-point LDI fuel/air mixer tested at inlet pressures up to 5500 kPa. Sample results from CFD and laser diagnostics are also discussed.
Problems with change in R2 as applied to theory of reasoned action research.
Trafimow, David
2004-12-01
The paradigm of choice for theory of reasoned action research seems to depend largely on the notion of change in variance accounted for (DeltaR2) as new independent variables are added to a multiple regression equation. If adding a particular independent variable of interest increases the variance in the dependent variable that can be accounted for by the list of independent variables, then the research is deemed to be 'successful', and the researcher is considered to have made a convincing argument about the importance of the new variable. In contrast to this trend, I present arguments that suggest serious problems with the paradigm, and conclude that studies on attitude-behaviour relations would advance the field of psychology to a far greater extent if researchers abandoned it.
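The ΔR² statistic the author critiques is simply the difference in the coefficient of determination before and after adding a predictor to the regression. A self-contained sketch with ordinary least squares via the normal equations (all data hypothetical):

```python
def r_squared(X, y):
    """Fit y = X·b by ordinary least squares (normal equations, Gaussian
    elimination -- adequate for tiny problems) and return R^2."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):                       # forward elimination
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p                           # back substitution
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, p))) / xtx[r][r]
    y_hat = [sum(X[i][a] * beta[a] for a in range(p)) for i in range(n)]
    y_bar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: intention (y), attitude (x1), a candidate new predictor (x2)
attitude = [1, 2, 3, 4, 5, 6]
new_var = [2, 1, 4, 3, 6, 5]
y = [1.2, 1.9, 3.2, 3.9, 5.1, 6.0]
r2_base = r_squared([[1, a] for a in attitude], y)
r2_full = r_squared([[1, a, b] for a, b in zip(attitude, new_var)], y)
delta_r2 = r2_full - r2_base  # the "change in variance accounted for"
```

Note that ΔR² can never be negative when a predictor is added, which is part of why the author argues it is a weak criterion of "success".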
Video denoising using low rank tensor decomposition
NASA Astrophysics Data System (ADS)
Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting
2017-03-01
Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Our method is therefore more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of the proposed method.
Lee, Jounghee; Park, Sohyun
2016-04-01
The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We implemented Chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. This study involved 104 individuals employed at the worksite cafeterias, comprised of 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium reduction practices were higher, but perceived barriers appeared to be lower in participants with high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted to have more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were use of processed foods and limited methods of sodium-reduced cooking in worksite cafeterias. To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods.
Ivezić, Slađana Štrkalj; Sesar, Marijan Alfonso; Mužinić, Lana
2017-03-01
Self-stigma adversely affects recovery from schizophrenia. Analyses of self-stigma reduction programs have discovered that few studies have investigated the impact of education about the illness on self-stigma reduction. The objective of this study was to determine whether psychoeducation based on the principles of recovery and empowerment, using therapeutic group factors, assists in the reduction of self-stigma, increases empowerment, and reduces the perception of discrimination in patients with schizophrenia. Forty patients participated in a psychoeducation group program and were compared with a control group of 40 patients placed on the waiting list for the same program. A Solomon four-group design was used to control for the influence of the pretest. Rating scales were used to measure internalized stigma, empowerment, and perception of discrimination. Two-way analysis of variance was used to determine the main effects and the interaction between treatment and pretest. Simple analysis of variance with repeated measures was used to additionally test the effect of treatment on self-stigma, empowerment, and perceived discrimination. The participants in the psychoeducation group had lower scores on internalized stigma (F(1,76)=8.18; p<0.01) than the patients treated as usual. The analysis also confirmed the same effect when comparing the experimental group before and after psychoeducation (F(1,19)=5.52; p<0.05). All participants showed a positive trend for empowerment. Psychoeducation did not influence the perception of discrimination. Group psychoeducation decreased the level of self-stigma. This intervention can assist in recovery from schizophrenia.
NASA Technical Reports Server (NTRS)
Goodall, R. G.; Painter, G. W.
1975-01-01
Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and in operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography, starting from a clinical image acquired at the standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant to digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image, and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation.
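The key property the method exploits is that the Anscombe transformation, A(x) = 2√(x + 3/8), converts signal-dependent Poisson noise into approximately unit-variance, signal-independent noise. A small sketch demonstrating this variance stabilization on simulated counts (the sampler and "dose levels" are illustrative, not the paper's procedure):

```python
import math
import random

def anscombe(x):
    """A(x) = 2*sqrt(x + 3/8): approximately variance-stabilizing for Poisson data."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson_sample(lam, rng):
    """Knuth's multiplication method; fine for the moderate means used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
variances = {}
for lam in (20.0, 200.0):  # two "dose levels" with very different raw variance
    vals = [anscombe(poisson_sample(lam, rng)) for _ in range(4000)]
    mean = sum(vals) / len(vals)
    variances[lam] = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
# Raw Poisson variance grows with the mean (20 vs 200), but both
# transformed variances come out close to 1.
```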
NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
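The averaging gains can be illustrated with a common textbook sensitivity form for a reference-switched radiometer, ΔT = T_sys √(1/(B·τ_ant) + 1/(B·τ_ref)): lengthening the effective antenna and reference integration times reduces ΔT. All numbers below are hypothetical and only show the scaling, not the paper's 68.5% result:

```python
import math

def delta_t(t_sys, bandwidth_hz, tau_antenna, tau_reference):
    """A common textbook sensitivity form for a reference-switched radiometer:
    dT = T_sys * sqrt(1/(B*tau_ant) + 1/(B*tau_ref))."""
    return t_sys * math.sqrt(1.0 / (bandwidth_hz * tau_antenna)
                             + 1.0 / (bandwidth_hz * tau_reference))

t_sys, b = 500.0, 50e6  # hypothetical system temperature (K) and bandwidth (Hz)
# Classic Dicke operation: 50% antenna duty cycle within a 1 s cycle
baseline = delta_t(t_sys, b, 0.50, 0.50)
# Averaging techniques: 83.3% antenna duty cycle plus a long reference average
improved = delta_t(t_sys, b, 0.833, 10.0)
reduction = 1.0 - improved / baseline
```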
Preliminary Study of Advanced Turboprops for Low Energy Consumption
NASA Technical Reports Server (NTRS)
Kraft, G. A.; Strack, W. C.
1975-01-01
The fuel savings potential of advanced turboprops (operational about 1985) was calculated and compared with that of an advanced turbofan for use in an advanced subsonic transport. At the design point, altitude 10.67 km and Mach 0.80, turbine-inlet temperature was fixed at 1590 K while overall pressure ratio was varied from 25 to 50. The regenerative turboprop had a pressure ratio of only 10 and an 85-percent-effective rotary heat exchanger. Variable-camber propellers were used with an efficiency of 85 percent. The study indicated that a fuel savings of 33 percent, a takeoff gross weight reduction of 15 percent, and a direct operating cost reduction of 18 percent were possible when turboprops were used instead of the reference turbofan at a range of 10 200 km. These reductions were 28, 11, and 14 percent, respectively, at a range of 5500 km. Increasing overall pressure ratio from 25 to 50 saved little fuel and slightly increased takeoff gross weight.
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
A helicopter moving horizontal tail (MHT) strategy is applied in order to save helicopter flight control system (FCS) energy. To this end, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and together they are linearized around a straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea, relative to a conventional helicopter, are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (SPSA). In order to observe the improvement in behavior over classical controls, closed-loop analyses are performed. PMID:26180841
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data over the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second removes an along-track once-per-revolution sine wave over about 40,000 km. Results show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
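The first orbit-error reduction technique amounts to a least-squares removal of a degree-1 polynomial (bias plus tilt) along each track segment. A minimal sketch on synthetic track data:

```python
def remove_along_track_trend(heights, distances):
    """Least-squares fit and removal of a degree-1 polynomial (bias + tilt)
    along track -- the first orbit-error reduction technique described."""
    n = len(heights)
    sx, sy = sum(distances), sum(heights)
    sxx = sum(x * x for x in distances)
    sxy = sum(x * h for x, h in zip(distances, heights))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return [h - (intercept + slope * x) for x, h in zip(distances, heights)]

# Synthetic track: sea-surface heights dominated by a 2 cm / 1000 km orbit tilt
track_km = [0.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0]
ssh_m = [0.10 + 2e-5 * x for x in track_km]
residual = remove_along_track_trend(ssh_m, track_km)  # tilt removed entirely
```

As the abstract notes, the same detrending that removes orbit error also attenuates any real large-scale oceanic signal that projects onto a tilt.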
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve (AUC) of C-peptide following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrollment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies yields a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed-vs.-expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
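The roughly 50% sample-size reduction follows directly from the standard two-arm formula n = 2(z_{1-α/2} + z_{power})²σ²/δ²: halving the residual variance σ² halves n. A sketch with illustrative numbers (not the study's actual effect sizes):

```python
import math
from statistics import NormalDist

def sample_size_per_arm(sigma, delta, alpha=0.05, power=0.90):
    """Two-arm normal-approximation formula:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1.0 - alpha / 2.0)
    z_b = z.inv_cdf(power)
    return math.ceil(2.0 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative numbers only: if covariate adjustment (age at diagnosis,
# baseline C-peptide) explains half the outcome variance, n roughly halves.
n_unadjusted = sample_size_per_arm(sigma=1.0, delta=0.5)
n_adjusted = sample_size_per_arm(sigma=math.sqrt(0.5), delta=0.5)
```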
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation, and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume-averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
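Of the angiography methods compared, speckle variance is the simplest to state: the per-pixel intensity variance across N repeated B-scans, which is high where scatterers move (blood flow) and near zero in static tissue. A toy sketch on synthetic frames:

```python
def speckle_variance(bscans):
    """Per-pixel intensity variance across N repeated B-scans.  Moving
    scatterers (flow) give high variance; static tissue gives ~0."""
    n = len(bscans)
    rows, cols = len(bscans[0]), len(bscans[0][0])
    sv = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [frame[i][j] for frame in bscans]
            mean = sum(vals) / n
            sv[i][j] = sum((v - mean) ** 2 for v in vals) / n
    return sv

# Three synthetic 2x2 frames: pixel (1,1) fluctuates ("flow"), the rest are static
frames = [
    [[10.0, 10.0], [10.0, 50.0]],
    [[10.0, 10.0], [10.0, 90.0]],
    [[10.0, 10.0], [10.0, 10.0]],
]
sv = speckle_variance(frames)
```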
NASA Noise Reduction Program for Advanced Subsonic Transports
NASA Technical Reports Server (NTRS)
Stephens, David G.; Cazier, F. W., Jr.
1995-01-01
Aircraft noise is an important byproduct of the world's air transportation system. Because of growing public interest and sensitivity to noise, noise reduction technology is becoming increasingly important to the unconstrained growth and utilization of the air transportation system. Unless noise technology keeps pace with public demands, noise restrictions at the international, national and/or local levels may unduly constrain the growth and capacity of the system to serve the public. In recognition of the importance of noise technology to the future of air transportation as well as the viability and competitiveness of the aircraft that operate within the system, NASA, the FAA and the industry have developed noise reduction technology programs having application to virtually all classes of subsonic and supersonic aircraft envisioned to operate far into the 21st century. The purpose of this paper is to describe the scope and focus of the Advanced Subsonic Technology Noise Reduction program with emphasis on the advanced technologies that form the foundation of the program.
2004 NASA Seal/Secondary Air System Workshop, Volume 1
NASA Technical Reports Server (NTRS)
2005-01-01
The 2004 NASA Seal/Secondary Air System workshop covered the following topics: (1) Overview of NASA's new Exploration Initiative program aimed at exploring the Moon, Mars, and beyond; (2) Overview of the NASA-sponsored Ultra-Efficient Engine Technology (UEET) program; (3) Overview of NASA Glenn's seal program aimed at developing advanced seals for NASA's turbomachinery, space, and reentry vehicle needs; (4) Reviews of NASA prime contractor and university advanced sealing concepts including tip clearance control, test results, experimental facilities, and numerical predictions; and (5) Reviews of material development programs relevant to advanced seals development. The NASA UEET overview illustrated for the reader the importance of advanced technologies, including seals, in meeting future turbine engine system efficiency and emission goals. For example, the NASA UEET program goals include an 8- to 15-percent reduction in fuel burn, a 15-percent reduction in CO2, a 70-percent reduction in NOx, CO, and unburned hydrocarbons, and a 30-dB noise reduction relative to program baselines. The workshop also covered several programs NASA is funding to develop technologies for the Exploration Initiative and advanced reusable space vehicle technologies. NASA plans on developing an advanced docking and berthing system that would permit any vehicle to dock to any on-orbit station or vehicle, as part of NASA's new Exploration Initiative. Plans to develop the necessary mechanism and androgynous seal technologies were reviewed. Seal challenges posed by reusable re-entry space vehicles include high-temperature operation, resiliency at temperature to accommodate gap changes during operation, and durability to meet mission requirements.
DEVELOPMENT OF A METHOD TO QUANTIFY THE IMPACT ...
Advances in human health risk assessment, especially for contaminants encountered by the inhalation route, have evolved so that the uncertainty factors (UF) used in the extrapolation of non-cancer effects across species (UFA) have been split into the respective pharmacodynamic (PD) and pharmacokinetic (PK) components. Present EPA default values for these components are divided into two half-logs (e.g., 10 to the 0.5 power or 3.16), so that their multiplication yields the 10-fold UF customarily seen in Agency risk assessments as UFA. The state of the science at present does not support a detailed evaluation of species-dependent and human interindividual variance of PD, but more data exist by which PK variance can be examined and quantified both across species and within the human species. Because metabolism accounts for much of the PK variance, we sought to examine the impact that differences in hepatic enzyme content exerts upon risk-relevant PK outcomes among humans. Because of the age and ethnic diversity expressed in the human organ donor population and the wide availability of tissues from these human organ donors, a program was developed to include information from those tissues in characterizing human interindividual PK variance. An Interagency Agreement with CDC/NIOSH Taft Laboratory, a Cooperative Agreement with CIIT Centers for Health Research, and a collaborative agreement with NHEERL/ETD were established to successfully complete the project. The di
Effect of ground skidding on oak advance regeneration
Jeffrey W. Stringer
2006-01-01
Vigorous advance regeneration is required to naturally regenerate oaks. However, a reduction in the number of advance regeneration stems from harvesting activities could be an important factor in determining successful oak regeneration. This study assessed the harvest survivability of advance regeneration of oak (Quercus spp.) and co-occurring...
Aasvang, E K; Werner, M U; Kehlet, H
2014-09-01
Deep pain complaints are more frequent than cutaneous complaints in post-surgical patients, and a prevalent finding in quantitative sensory testing studies. However, the preferred assessment method, pressure algometry, is indirect and tissue-unspecific, hindering advances in treatment and preventive strategies. Thus, there is a need to develop methods with direct stimulation of suspected hyperalgesic tissues to identify the peripheral origin of nociceptive input. We compared the reliability of an ultrasound-guided needle stimulation protocol of electrical detection and pain thresholds to pressure algometry, by performing identical test-retest sequences 10 days apart, in deep tissues in the groin region. Electrical stimulation was performed by five up-and-down staircase series of single impulses of 0.04 ms duration, starting from 0 mA in increments of 0.2 mA until a threshold was reached and descending until sensation was lost. Method reliability was assessed by Bland-Altman plots, descriptive statistics, coefficients of variance, and intraclass correlation coefficients. The electrical stimulation method was comparable to pressure algometry regarding 10-day test-retest repeatability, but with superior same-day reliability for electrical stimulation (P < 0.05). Between-subject variance rather than within-subject variance was the main source of test variation. There were no systematic differences in electrical thresholds across tissues and locations (P > 0.05). The presented tissue-specific direct deep tissue electrical stimulation technique has equal or superior reliability compared with the indirect, tissue-unspecific stimulation by pressure algometry. This method may facilitate advances in mechanism-based preventive and treatment strategies in acute and chronic post-surgical pain states. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Yan, Fang; Bond, Tami C; Streets, David G
2014-12-16
This work evaluates the effectiveness of on-road primary particulate matter emission reductions that can be achieved by long-term vehicle scrappage and retrofit measures on regional and global levels. Scenario analysis shows that scrappage can provide significant emission reductions as soon as the measures begin, whereas retrofit provides greater emission reductions in later years, when more advanced technologies become available in most regions. Reductions are compared with a baseline that already accounts for implementation of clean vehicle standards. The greatest global emission reductions from a scrappage program occur 5 to 10 years after its introduction and can reach as much as 70%. The greatest reductions with retrofit occur around 2030 and range from 16-31%. Monte Carlo simulations are used to evaluate how uncertainties in the composition of the vehicle fleet affect predicted reductions. Scrappage and retrofit reduce global emissions by 22-60% and 15-31%, respectively, within 95% confidence intervals, under a midrange scenario in the year 2030. The simulations provide guidance about which strategies are most effective for specific regions. Retrofit is preferable for high-income regions. For regions where early emission standards are in place, scrappage is suggested, followed by retrofit after more advanced emission standards are introduced. The early implementation of advanced emission standards is recommended for Western and Eastern Africa.
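The Monte Carlo analysis can be sketched generically: sample the uncertain fleet-composition inputs, propagate them through the reduction calculation, and report a 95% interval. The distributions and bounds below are invented for illustration and are not the study's inputs:

```python
import random

def scrappage_reduction_interval(n_trials=10000, seed=1):
    """Monte Carlo sketch: sample uncertain fleet-composition inputs and
    report the 95% interval of the fleet-level emission reduction.
    All distributions and bounds are invented for illustration."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        eligible_share = rng.uniform(0.4, 0.8)   # share of fleet old enough to scrap
        per_vehicle_cut = rng.uniform(0.5, 0.9)  # emission cut per scrapped vehicle
        results.append(eligible_share * per_vehicle_cut)
    results.sort()
    return results[int(0.025 * n_trials)], results[int(0.975 * n_trials)]

low, high = scrappage_reduction_interval()
```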
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert non-positive signals to positive signals chiefly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
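The second step can be caricatured in one dimension: replace each non-positive raw measurement with the local mean of its positive neighbours so that the log transform is defined everywhere. This is a simplified stand-in for the paper's variance-controlled conversion, with an invented window size:

```python
import math

def to_positive(raw, window=2):
    """Replace each non-positive sample with the mean of positive samples in
    a +/- `window` neighbourhood (simplified 1D stand-in for the paper's
    variance-controlled conversion)."""
    out = []
    for i, v in enumerate(raw):
        if v > 0:
            out.append(v)
            continue
        neighbours = [raw[j]
                      for j in range(max(0, i - window), min(len(raw), i + window + 1))
                      if raw[j] > 0]
        out.append(sum(neighbours) / len(neighbours) if neighbours else 1e-6)
    return out

# Low-dose raw measurements with electronic-noise excursions at or below zero
raw = [120.0, 95.0, -3.0, 0.0, 110.0, 130.0]
positive = to_positive(raw)
attenuation = [math.log(v) for v in positive]  # log transform now well defined
```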
POLLUTION PREVENTION RESEARCH ONGOING - EPA'S RISK REDUCTION ENGINEERING LABORATORY
The mission of the Risk Reduction Engineering Laboratory is to advance the understanding, development and application of engineering solutions for the prevention or reduction of risks from environmental contamination. This mission is accomplished through basic and applied researc...
Least-squares dual characterization for ROI assessment in emission tomography
NASA Astrophysics Data System (ADS)
Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.
2013-06-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing on the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates are affected by environmental conditions, are linked with soil heterotrophic CO2 emissions, and serve to estimate soil carbon sequestration. By the mass balance equation, variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although the percentage of variance in soil heterotrophic respiration explained by soil moisture was small, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture responses of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate the correct levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against the annual heterotrophic respiration and soil carbon stocks.
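The exponential temperature response that explained most of the respiration variance can be fitted as a line in log space. The synthetic data below (base rate, temperature sensitivity, noise level) are illustrative assumptions, not the ICP Forests measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic forest-floor respiration driven by soil temperature
T = rng.uniform(2, 20, 200)                       # soil temperature, deg C
R_true = 0.8 * np.exp(0.09 * T)                   # assumed exponential response
R = R_true * np.exp(rng.normal(0, 0.05, T.size))  # multiplicative noise

# Exponential regression R = a*exp(b*T), fitted as log(R) = log(a) + b*T
b, log_a = np.polyfit(T, np.log(R), 1)
R_hat = np.exp(log_a) * np.exp(b * T)

# Fraction of variance explained, the statistic quoted (>90%) in the abstract
ss_res = np.sum((R - R_hat) ** 2)
ss_tot = np.sum((R - R.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

When temperature dominates, the fitted exponential recovers the assumed sensitivity and the explained variance is high, mirroring the >90% figure in the abstract.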
Meta-analysis of the performance variation in broilers experimentally challenged by Eimeria spp.
Kipper, Marcos; Andretta, Ines; Lehnen, Cheila Roberta; Lovatto, Paulo Alberto; Monteiro, Silvia Gonzalez
2013-09-01
A meta-analysis was carried out to (1) study the relation of the variation in feed intake and weight gain in broilers infected with Eimeria acervulina, Eimeria maxima, Eimeria tenella, or a Pool of Eimeria species, and (2) to identify and to quantify the effects involved in the infection. A database of articles addressing the experimental infection with Coccidia in broilers was developed. These publications must present results of animal performance (weight gain, feed intake, and feed conversion ratio). The database was composed by 69 publications, totalling around 44 thousand animals. Meta-analysis followed three sequential analyses: graphical, correlation, and variance-covariance. The feed intake of the groups challenged by E. acervulina and E. tenella did not differ (P>0.05) to the control group. However, the feed intake in groups challenged by E. maxima and Pool showed an increase of 8% and 5% (P<0.05) in relation to the control group. Challenged groups presented a decrease (P<0.05) in weight gain compared with control groups. All challenged groups showed a reduction in weight gain, even when there was no reduction (P<0.05) in feed intake (adjustment through variance-covariance analysis). The feed intake variation in broilers infected with E. acervulina, E. maxima, E. tenella, or Pool showed a quadratic (P<0.05) influence over the variation in weight gain. In relation to the isolated effects, the challenges have an impact of less than 1% over the variance in feed intake and weight gain. However, the magnitude of the effects varied with Eimeria species, animal age, sex, and genetic line. In general the age effect is superior to the challenge effect, showing that age at the challenge is important to determine the impact of Eimeria infection. Copyright © 2013 Elsevier B.V. All rights reserved.
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
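The weight-window mechanics that such maps drive can be illustrated with a minimal splitting/Russian-roulette routine. This is a schematic sketch of the general technique, not the FW-CADIS or MCNPX implementation; the window bounds and survival weight are illustrative conventions:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Apply a weight window to one particle's statistical weight.

    Returns a list of (count, weight) entries to continue tracking:
    - above w_high: split into copies whose weights fall inside the window
    - below w_low : Russian roulette, surviving at an increased weight or dying
    - inside      : unchanged
    Total expected weight is preserved in every branch (unbiased).
    """
    if weight > w_high:
        n = int(weight / w_high) + 1          # split into n copies
        return [(n, weight / n)]
    if weight < w_low:
        w_survive = (w_low + w_high) / 2.0    # assumed survival weight
        if rng.random() < weight / w_survive: # survive with prob w/w_survive
            return [(1, w_survive)]
        return []                             # rouletted away
    return [(1, weight)]
```

Splitting puts more samples into important regions while roulette culls unimportant low-weight histories, which is how a good weight-window map lowers variance per unit of computer time.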
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in predicted block times can result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, can negate any possible profit. Although optimization techniques have produced many models targeting airline operations, accurately predicting and quantifying variables months in advance remains a challenge. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline were used to develop a model that predicts airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, together with historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimates representing the average actual block times while minimizing the variation. Predictions of block times for the third-quarter months of July and August 2011 were calculated using this new model. The actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major U.S. domestic airline.
77 FR 43083 - Federal Acquisition Regulation; Information Collection; Advance Payments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
...; Information Collection; Advance Payments AGENCIES: Department of Defense (DOD), General Services... Paperwork Reduction Act, the Regulatory Secretariat will be submitting to the Office of Management and... requirement concerning advance payments. Public comments are particularly invited on: Whether this collection...
2011-01-01
Background Biologists studying adaptation under sexual selection have spent considerable effort assessing the relative importance of two groups of models, which hinge on the idea that females gain indirect benefits via mate discrimination. These are the good genes and genetic compatibility models. Quantitative genetic studies have advanced our understanding of these models by enabling assessment of whether the genetic architectures underlying focal phenotypes are congruent with either model. In this context, good genes models require underlying additive genetic variance, while compatibility models require non-additive variance. Currently, we know very little about how the expression of genotypes composed of distinct parental haplotypes, or the levels and types of genetic variance underlying key phenotypes, change across environments. Such knowledge is important, however, because genotype-environment interactions can have major implications on the potential for evolutionary responses to selection. Results We used a full diallel breeding design to screen for complex genotype-environment interactions, and genetic architectures underlying key morphological traits, across two thermal environments (the lab standard 27°C, and the cooler 23°C) in the Australian field cricket, Teleogryllus oceanicus. In males, complex three-way interactions between sire and dam parental haplotypes and the rearing environment accounted for up to 23 per cent of the scaled phenotypic variance in the traits we measured (body mass, pronotum width and testes mass), and each trait harboured significant additive genetic variance in the standard temperature (27°C) only. In females, these three-way interactions were less important, with interactions between the paternal haplotype and rearing environment accounting for about ten per cent of the phenotypic variance (in body mass, pronotum width and ovary mass). 
Of the female traits measured, only ovary mass for crickets reared at the cooler temperature (23°C), exhibited significant levels of additive genetic variance. Conclusions Our results show that the genetics underlying phenotypic expression can be complex, context-dependent and different in each of the sexes. We discuss the implications of these results, particularly in terms of the evolutionary processes that hinge on good and compatible genes models. PMID:21791118
Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa
2005-03-01
Pulverized coal reburning, ammonia injection and advanced reburning in a pilot-scale drop tube furnace were investigated. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters investigated included 15% to 25% reburn heat input, a temperature range from 1100 °C to 1400 °C, carbon in fly ash, coal fineness, and the reburn zone stoichiometric ratio. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C to 1100 °C. CO can improve the effectiveness of NH3 at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, selective non-catalytic NOx reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, whereas advanced reburning uses only the common dose of ammonia of conventional SNCR technology. Mechanism study shows that the oxidation of CO can promote the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures.
Forecasting of Radiation Belts: Results From the PROGRESS Project.
NASA Astrophysics Data System (ADS)
Balikhin, M. A.; Arber, T. D.; Ganushkina, N. Y.; Walker, S. N.
2017-12-01
The overall goal of the PROGRESS project, funded under the EU Horizon 2020 programme, is to combine first-principles models with systems-science methodologies to achieve reliable forecasts of the geo-space particle radiation environment. PROGRESS incorporates three themes: the propagation of the solar wind to L1, the forecast of geomagnetic indices, and the forecast of fluxes of energetic electrons within the magnetosphere. One important aspect of the PROGRESS project is the development of statistical wave models for magnetospheric waves that affect the dynamics of energetic electrons, such as lower-band chorus, hiss and equatorial noise. The error reduction ratio (ERR) concept has been used to optimise the set of solar wind and geomagnetic parameters for the organisation of statistical wave models for these emissions. The resulting sets of parameters and statistical wave models are presented and discussed. However, the ERR analysis also indicates that the combination of solar wind and geomagnetic parameters accounts for only part of the variance of the emissions under investigation (lower-band chorus, hiss and equatorial noise). In addition, advances in the forecast of fluxes of energetic electrons achieved by the PROGRESS project, exploiting empirical models and the first-principles IMPTAM model, are presented.
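The error reduction ratio used here to rank candidate drivers can be illustrated in its simplest one-regressor form: the fraction of output variance a single input accounts for under least squares. The synthetic "wave intensity" data below are illustrative assumptions, not PROGRESS measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

def error_reduction_ratio(x, y):
    """Fraction of the variance of y accounted for by a single regressor x
    (a simplified one-term version of the ERR used in model structure
    selection; the full algorithm ranks many orthogonalised candidates)."""
    x = x - x.mean()
    y = y - y.mean()
    g = np.dot(x, y) / np.dot(x, x)          # least-squares gain
    return g**2 * np.dot(x, x) / np.dot(y, y)

# Synthetic wave intensity driven mostly by one solar-wind parameter
driver = rng.normal(size=500)
other = rng.normal(size=500)
wave = 2.0 * driver + 0.5 * other

err_driver = error_reduction_ratio(driver, wave)   # large: keep this parameter
err_other = error_reduction_ratio(other, wave)     # small: weak contributor
```

Ranking candidate parameters by ERR, and noting how much variance remains unexplained, is exactly the kind of conclusion the abstract draws about solar wind and geomagnetic drivers.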
NASA Astrophysics Data System (ADS)
Li, Z.; Liu, P.; Feng, M.; Zhang, J.
2017-12-01
Based on the modeling of the water supply, power generation and environment (WPE) nexus by Feng et al. (2016), a refined theoretical model of competitive water consumption between human society and the environment is presented in this study, examining pollution mitigation induced by technology advancement and growth in social environmental awareness as a mechanism for establishing and maintaining the coexistence of higher social water consumption and an improved environmental condition. By coupling environmental and social dynamics, both represented by water consumption quantity, this study shows that a sustainable state of the social-environmental system is possible when the benefit of technology offsets the side effect (pollution) of social development on the environment. Additionally, regime shifts can be triggered by a gradually increasing pollution rate, climate change-induced reduction of natural resources, or a breakdown of social environmental awareness. Therefore, in order to foresee pending abrupt regime shifts of the system, early warning signals, including increasing variance and autocorrelation, have been examined while the system undergoes stochastic disturbance. Reference: Feng, M., et al. (2016). Modeling the nexus across water supply, power generation and environment systems using the system dynamics approach: Hehuang Region, China. J. Hydrol., 543: 344-359.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
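The within-model/between-model variance split at each node of the BMA tree follows the law of total variance. A minimal numeric sketch, with made-up posterior model probabilities and predictions rather than values from the groundwater study:

```python
import numpy as np

# Hypothetical candidate models for one uncertain component
p = np.array([0.5, 0.3, 0.2])       # posterior model probabilities (sum to 1)
mean = np.array([10.0, 12.0, 9.0])  # each model's prediction (e.g., head in m)
var = np.array([1.0, 2.0, 1.5])     # each model's within-model variance

bma_mean = np.sum(p * mean)                      # model-averaged prediction
within = np.sum(p * var)                         # expected within-model variance
between = np.sum(p * (mean - bma_mean) ** 2)     # spread of the model means
total = within + between                         # law of total variance
```

Repeating this decomposition level by level down the tree is what lets HBMA attribute the total predictive variance to each uncertain model component.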
Control algorithms for dynamic attenuators.
Hsieh, Scott S; Pelc, Norbert J
2014-06-01
The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
Aircraft Noise Reduction Subproject Overview
NASA Technical Reports Server (NTRS)
Fernandez, Hamilton; Nark, Douglas M.; Van Zante, Dale E.
2016-01-01
The material presents highlights of propulsion and airframe noise research being completed for the Advanced Air Transport Technology Project. The basis of the noise reduction plans, along with representative work on the airframe, propulsion, and propulsion-airframe integration, is discussed for the Aircraft Noise Reduction Subproject.
Goals and potential career advancement of licensed practical nurses in Japan.
Ikeda, Mari; Inoue, Katsuya; Kamibeppu, Kiyoko
2008-10-01
To investigate the effects of personal and professional variables on career advancement intentions of working Licensed Practical Nurses (LPNs). In Japan, two levels of professional nursing licensure, the LPN and the registered nurse (RN), are likely to be integrated in the future. Therefore, it is important to know the career advancement intentions of LPNs. Questionnaires were sent to a sample of 356 LPNs. Analysis of variance (ANOVA) and discriminant analysis were used. We found that those who had a positive image of LPNs along with a positive image of RNs were identified as showing interest in career advancement. The ANOVA results showed that age had a negative effect; however, the discriminant analysis suggested that age is not as significant as other variables. Our results indicate that the 'image of RNs' and 'role-acceptance' factors have an effect on career advancement intentions of LPNs. Our results suggest that nursing managers should create a supportive working environment where the LPN would feel encouraged to carry out the nursing role, thereby creating a positive image of nursing in general, which would lead to career motivation and pursuing RN status.
Keating, Nancy L; Landrum, Mary Beth; Huskamp, Haiden A; Kouri, Elena M; Prigerson, Holly G; Schrag, Deborah; Maciejewski, Paul K; Hornbrook, Mark C; Haggstrom, David A
2016-08-01
Assess validity of the retrospective Dartmouth hospital referral region (HRR) end-of-life spending measures by comparing with health care expenditures from diagnosis to death for prospectively identified advanced lung cancer patients. We calculated health care spending from diagnosis (2003-2005) to death or through 2011 for 885 patients aged ≥65 years with advanced lung cancer using Medicare claims. We assessed the association between Dartmouth HRR-level spending in the last 2 years of life and patient-level spending using linear regression with random HRR effects, adjusting for patient characteristics. For each $1 increase in the Dartmouth metric, spending for our cohort increased by $0.74 (p < .001). The Dartmouth spending variable explained 93.4 percent of the HRR-level variance in observed spending. HRR-level spending estimates for deceased patient cohorts reflect area-level care intensity for prospectively identified advanced lung cancer patients. © Health Research and Educational Trust.
Factors influencing the quality of life of patients with advanced cancer.
Park, Sun-A; Chung, Seung Hyun; Lee, Youngjin
2017-02-01
The present study aimed to determine the predictors of quality of life (QOL) of patients with advanced cancer. A cross-sectional study involving 494 patients with advanced cancer was conducted using the Memorial Symptom Assessment Scale-Short Form, the Karnofsky Performance Status Scale, the World Health Organization Disability Assessment Schedule (Korean version), and the European Organization for Research and Treatment of Cancer Quality of Life Core 30. Regression analyses showed that physical and psychological symptoms significantly predicted the patients' QOL and explained 28.8% of the variance in QOL. Moreover, lack of energy was the patients' most prevalent symptom. The results of the present study will serve as fundamental data upon which the development of an intervention will be based so as to enhance the patients' QOL. Accordingly, an effective management of symptoms and performance maintenance should be considered in the future as key factors in providing support and establishing palliative care systems for patients with advanced cancer. Copyright © 2016. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by insuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
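The effect of temporarily inflating the process-noise covariance Q can be seen in a scalar Kalman filter: a larger Q raises the gain, so the most recent attitude measurements are weighted more heavily, which is the transient behavior the method exploits during gyroscope substitution. The numbers below are illustrative, not from the patent:

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter with identity
    dynamics and measurement, enough to show the effect of inflating Q."""
    P = P + Q                 # predict: process noise grows the covariance
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update: move the estimate toward the measurement
    P = (1 - K) * P           # covariance after the update
    return x, P, K

# Same prior and measurement, two process-noise settings (illustrative values)
x0, P0, z, R = 0.0, 0.01, 1.0, 0.04
_, _, K_nominal = kalman_step(x0, P0, z, Q=0.001, R=R)   # operational Q
_, _, K_inflated = kalman_step(x0, P0, z, Q=0.5, R=R)    # interim, inflated Q
```

With the inflated Q the gain approaches 1, so attitude and gyro-bias errors are pulled down quickly; restoring the operational Q afterwards returns the filter to its smoother steady-state behavior.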
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve of C-peptide following a 2-h mixed meal tolerance test, from 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled from baseline to 12 months after enrolment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies yields a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
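The link between covariate adjustment and sample size can be sketched with the standard two-sample normal-approximation formula: sample size scales with the outcome variance, so an ANCOVA that shrinks the residual SD shrinks the trial. The effect size and SDs below are assumed planning values for illustration, not the TrialNet estimates:

```python
import math

def n_per_arm(sd, delta, z_alpha=1.96, z_beta=0.8416):
    """Two-sample sample size per arm (normal approximation), for a
    two-sided alpha of 0.05 and 80% power by default."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd**2 / delta**2)

delta = 0.2                               # assumed treatment effect on C-peptide loss
n_raw = n_per_arm(sd=0.50, delta=delta)   # unadjusted outcome variance (assumed)
n_adj = n_per_arm(sd=0.36, delta=delta)   # residual SD after ANCOVA (assumed)

reduction = 1 - n_adj / n_raw             # close to the ~50% the abstract reports
```

Because n is proportional to sd squared, even a modest reduction in residual SD (here 0.50 to 0.36) roughly halves the required enrolment.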
Arjunan, Sridhar P; Kumar, Dinesh K; Bastos, Teodiano
2012-01-01
This study has investigated the effect of age on the fractal based complexity measure of muscle activity and variance in the force of isometric muscle contraction. Surface electromyogram (sEMG) and force of muscle contraction were recorded from 40 healthy subjects categorized into: Group 1: Young - age range 20-30; 10 Males and 10 Females, Group 2: Old - age range 55-70; 10 Males and 10 Females during isometric exercise at Maximum Voluntary contraction (MVC). The results show that there is a reduction in the complexity of surface electromyogram (sEMG) associated with aging. The results demonstrate that there is an increase in the coefficient of variance (CoV) of the force of muscle contraction and a decrease in complexity of sEMG for the Old age group when compared with the Young age group.
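The coefficient of variation of the force signal, one of the two age-related measures above, is straightforward to compute. The synthetic force traces below (means, noise levels) are illustrative assumptions, not the study's sEMG recordings:

```python
import numpy as np

def coefficient_of_variation(force):
    """CoV of an isometric force record: standard deviation as a fraction
    of the mean force level."""
    force = np.asarray(force, dtype=float)
    return force.std() / force.mean()

# Synthetic force traces around the same mean level (illustrative)
rng = np.random.default_rng(4)
force_young = 100 + rng.normal(0, 2.0, 1000)   # steadier contraction
force_old = 100 + rng.normal(0, 5.0, 1000)     # larger fluctuations with age

cov_young = coefficient_of_variation(force_young)
cov_old = coefficient_of_variation(force_old)
```

A higher CoV for the older group, as constructed here, mirrors the reported increase in force variability with age; the sEMG complexity measure in the study is a separate (fractal-based) computation not sketched here.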
Cargo/Logistics Airlift System Study (CLASS), Executive Summary
NASA Technical Reports Server (NTRS)
Norman, J. M.; Henderson, R. D.; Macey, F. C.; Tuttle, R. P.
1978-01-01
The current air cargo system is analyzed along with studies of advanced air cargo systems. A forecast of advanced air cargo system demand is presented with cost estimates. It is concluded that there is a need for a dedicated advanced air cargo system and that, with the application of advanced technology, reductions of 45% in air freight rates may be achieved.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
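The performability idea in the AIPS abstract — obtaining the probability law of a performance measure from a Markov reliability model, then summarizing it by its mean and variance — can be sketched in a few lines. The transition matrix and per-state performance levels below are invented for illustration, not AIPS parameters:

```python
import numpy as np

# Three-state Markov reliability model: fully operational, degraded, failed.
# Rows are current states, columns are next states (discrete time steps).
P = np.array([[0.98, 0.015, 0.005],
              [0.00, 0.97,  0.03 ],
              [0.00, 0.00,  1.0  ]])
perf = np.array([1.0, 0.6, 0.0])   # assumed performance level in each state

pi = np.array([1.0, 0.0, 0.0])     # start fully operational
for _ in range(100):               # state probabilities after 100 steps
    pi = pi @ P

# Scalar measures of merit: mean and variance of performance under pi.
mean_perf = pi @ perf
var_perf = pi @ (perf - mean_perf) ** 2
```

The reliability model supplies the state-occupancy probabilities `pi`; the performance analysis (queueing theory, in AIPS) supplies `perf`; the mean and variance of their combination are the performability figures of merit.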
Gross, Alden L; Rebok, George W; Unverzagt, Frederick W; Willis, Sherry L; Brandt, Jason
2011-09-01
The present study sought to predict changes in everyday functioning using cognitive tests. Data from the Advanced Cognitive Training for Independent and Vital Elderly trial were used to examine the extent to which competence in different cognitive domains--memory, inductive reasoning, processing speed, and global mental status--predicts prospectively measured everyday functioning among older adults. Coefficients of determination for baseline levels and trajectories of everyday functioning were estimated using parallel process latent growth models. Each cognitive domain independently predicts a significant proportion of the variance in baseline and trajectory change of everyday functioning, with inductive reasoning explaining the most variance (R2 = .175) in baseline functioning and memory explaining the most variance (R2 = .057) in changes in everyday functioning. Inductive reasoning is an important determinant of current everyday functioning in community-dwelling older adults, suggesting that successful performance in daily tasks is critically dependent on executive cognitive function. On the other hand, baseline memory function is more important in determining change over time in everyday functioning, suggesting that some participants with low baseline memory function may reflect a subgroup with incipient progressive neurologic disease.
DOT National Transportation Integrated Search
1997-01-01
This document reports on the formal evaluation of the targeted (limited but highly focused) deployment of the Advanced Driver and Vehicle Advisory Navigation ConcEpt (ADVANCE), an in-vehicle advanced traveler information system designed to provide sh...
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION ...
20. VIEW OF THE INTERIOR OF THE ADVANCED SIZE REDUCTION FACILITY USED TO CUT PLUTONIUM CONTAMINATED GLOVE BOXES AND MISCELLANEOUS LARGE EQUIPMENT DOWN TO AN EASILY PACKAGED SIZE FOR DISPOSAL. ROUTINE OPERATIONS WERE PERFORMED REMOTELY, USING HOISTS, MANIPULATOR ARMS, AND GLOVE PORTS TO REDUCE BOTH INTENSITY AND TIME OF RADIATION EXPOSURE TO THE OPERATOR. (11/6/86) - Rocky Flats Plant, Plutonium Fabrication, Central section of Plant, Golden, Jefferson County, CO
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Jones, Scott M.; Arcara, Philip C., Jr.; Haller, William J.
2004-01-01
NASA's Ultra Efficient Engine Technology (UEET) program features advanced aeropropulsion technologies that include highly loaded turbomachinery, an advanced low-NOx combustor, high-temperature materials, intelligent propulsion controls, aspirated seal technology, and an advanced computational fluid dynamics (CFD) design tool to help reduce airplane drag. A probabilistic system assessment is performed to evaluate the impact of these technologies on aircraft fuel burn and NOx reductions. A 300-passenger aircraft, with two 396-kN thrust (85,000-pound) engines is chosen for the study. The results show that a large subsonic aircraft equipped with the UEET technologies has a very high probability of meeting the UEET Program goals for fuel-burn (or equivalent CO2) reduction (15% from the baseline) and LTO (landing and takeoff) NOx reductions (70% relative to the 1996 International Civil Aviation Organization rule). These results are used to provide guidance for developing a robust UEET technology portfolio, and to prioritize the most promising technologies required to achieve UEET program goals for the fuel-burn and NOx reductions.
Lee, Jounghee; Park, Sohyun
2015-01-01
Objectives The sodium content of meals provided at worksite cafeterias is greater than the sodium content of restaurant meals and home meals. The objective of this study was to assess the relationships between sodium-reduction practices, barriers, and perceptions among food service personnel. Methods We implemented a cross-sectional study by collecting data on perceptions, practices, barriers, and needs regarding sodium-reduced meals at 17 worksite cafeterias in South Korea. We implemented Chi-square tests and analysis of variance for statistical analysis. For post hoc testing, we used Bonferroni tests; when variances were unequal, we used Dunnett T3 tests. Results This study involved 104 individuals employed at the worksite cafeterias, comprised of 35 men and 69 women. Most of the participants had relatively high levels of perception regarding the importance of sodium reduction (very important, 51.0%; moderately important, 27.9%). Sodium reduction practices were higher, but perceived barriers appeared to be lower in participants with high-level perception of sodium-reduced meal provision. The results of the needs assessment revealed that the participants wanted to have more active education programs targeting the general population. The biggest barriers to providing sodium-reduced meals were use of processed foods and limited methods of sodium-reduced cooking in worksite cafeterias. Conclusion To make the provision of sodium-reduced meals at worksite cafeterias more successful and sustainable, we suggest implementing more active education programs targeting the general population, developing sodium-reduced cooking methods, and developing sodium-reduced processed foods. PMID:27169011
Michael Hoppus; Stan Arner; Andrew Lister
2001-01-01
A reduction in variance for estimates of forest area and volume in the state of Connecticut was accomplished by stratifying FIA ground plots using raw, transformed and classified Landsat Thematic Mapper (TM) imagery. A US Geological Survey (USGS) Multi-Resolution Landscape Characterization (MRLC) vegetation cover map for Connecticut was used to produce a forest/non-...
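The variance reduction from stratifying ground plots with a cover map comes from removing the between-stratum component of variance. A minimal simulation with synthetic forest/non-forest strata (all numbers assumed, not FIA data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "plots": forest fraction differs sharply between two map strata.
forest = rng.normal(0.9, 0.05, 5000)      # stratum 1: mostly forest
nonforest = rng.normal(0.1, 0.05, 5000)   # stratum 2: mostly non-forest
pop = np.concatenate([forest, nonforest])

def srs_estimate(n):
    """Simple random sample mean over the whole population."""
    return rng.choice(pop, n, replace=False).mean()

def stratified_estimate(n):
    """Proportional allocation across the two equal-sized strata."""
    half = n // 2
    return 0.5 * (rng.choice(forest, half, replace=False).mean()
                  + rng.choice(nonforest, half, replace=False).mean())

srs = np.array([srs_estimate(100) for _ in range(500)])
strat = np.array([stratified_estimate(100) for _ in range(500)])

# Stratification removes the (large) between-stratum variance.
print(strat.var() < srs.var())  # True
```

The better the imagery-derived strata separate the plots, the larger the gap between the two estimator variances.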
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Donald Rumsfeld, designated “U.S. Transportation Command as the single Department of Defense Distribution Process Owner (DPO)” (USTRANSCOM, 2004...paragraphs explain OptQuest’s functionality and capabilities as described by Laguna (1997) and Glover et al. (1999) as well as the OptQuest for ARENA...throughout the solution space (Glover et al., 1999). Heuristics are strategies (in this case algorithms) that use different techniques and available
Deconstructing Demand: The Anthropogenic and Climatic Drivers of Urban Water Consumption.
Hemati, Azadeh; Rippy, Megan A; Grant, Stanley B; Davis, Kristen; Feldman, David
2016-12-06
Cities in drought prone regions of the world such as South East Australia are faced with escalating water scarcity and security challenges. Here we use 72 years of urban water consumption data from Melbourne, Australia, a city that recently overcame a 12 year "Millennium Drought", to evaluate (1) the relative importance of climatic and anthropogenic drivers of urban water demand (using wavelet-based approaches) and (2) the relative contribution of various water saving strategies to demand reduction during the Millennium Drought. Our analysis points to conservation as a dominant driver of urban water savings (69%), followed by nonrevenue water reduction (e.g., reduced meter error and leaks in the potable distribution system; 29%), and potable substitution with alternative sources like rain or recycled water (3%). Per-capita consumption exhibited both climatic and anthropogenic signatures, with rainfall and temperature explaining approximately 55% of the variance. Anthropogenic controls were also strong (up to 45% variance explained). These controls were nonstationary and frequency-specific, with conservation measures like outdoor water restrictions impacting seasonal water use and technological innovation/changing social norms impacting lower frequency (baseline) use. The above-noted nonstationarity implies that wavelets, which do not assume stationarity, show promise for use in future predictive models of demand.
NASA Astrophysics Data System (ADS)
Masson, F.; Mouyen, M.; Hwang, C.; Wu, Y.-M.; Ponton, F.; Lehujeur, M.; Dorbath, C.
2012-11-01
Using a Bouguer anomaly map and a dense seismic data set, we have performed two studies in order to improve our knowledge of the deep structure of Taiwan. First, we model the Bouguer anomaly along a profile crossing the island using simple forward modelling. The modelling is 2D, with the hypothesis of cylindrical symmetry. Second, we present a joint analysis of gravity anomaly and seismic arrival time data recorded in Taiwan. An initial velocity model was obtained by local earthquake tomography (LET) of the seismological data. The LET velocity model was used to construct an initial 3D gravity model, using a linear velocity-density relationship (Birch's law). The synthetic Bouguer anomaly calculated for this model has the same shape and wavelength as the observed anomaly. However, some characteristics of the anomaly map are not retrieved. To derive a crustal velocity/density model which accounts for both types of observations, we performed a sequential inversion of seismological and gravity data. The variance reduction of the arrival time data for the final sequential model was comparable to the variance reduction obtained by simple LET. Moreover, the sequential model explained about 80% of the observed gravity anomaly. A new 3D model of the Taiwan lithosphere is presented.
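Two ingredients named above can be written down compactly: a linear Birch's-law mapping from P-wave velocity to density, and the "variance reduction" statistic used to score how much of the data a model explains. The coefficients and data values here are assumptions for illustration, not the Taiwan inversion's:

```python
import numpy as np

def birch_density(vp_km_s, a=0.77, b=0.30):
    """Linear velocity-density relation (Birch's law); a, b are assumed coefficients."""
    return a + b * np.asarray(vp_km_s)    # density in g/cm^3

def variance_reduction(observed, predicted):
    """Fraction of the observed data variance explained by the model."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(observed ** 2)

vp = np.array([5.0, 6.0, 7.0, 8.0])       # LET-style velocity model (illustrative)
rho = birch_density(vp)                   # density model fed to the gravity forward step

obs = np.array([10.0, -5.0, 3.0])         # e.g. arrival-time or anomaly residual data
pred = np.array([9.0, -4.0, 2.5])
print(round(variance_reduction(obs, pred), 3))  # 0.983
```

In a sequential inversion, this statistic is tracked for the seismic data after each gravity-constrained update to check that the joint model does not degrade the travel-time fit.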
Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.
Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John
2008-02-01
A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
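The propensity-score weighting compared in this study can be sketched directly: responders are weighted by the inverse of their modeled response probability, which undoes the over-representation of easy-to-reach groups. In this hedged toy version the true propensities are used in place of a fitted logistic model, and all prevalences are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.binomial(1, 0.5, n)                               # demographic covariate (e.g. home owner)
disease = rng.binomial(1, np.where(x == 1, 0.30, 0.10))   # true prevalence = 0.20 overall

# Response is more likely when x == 1, so the naive responder mean is biased upward.
p_resp = np.where(x == 1, 0.8, 0.4)
responded = rng.binomial(1, p_resp).astype(bool)

naive = disease[responded].mean()

# Inverse-propensity weighting: each responder stands in for 1/p of the population.
w = 1.0 / p_resp[responded]
weighted = np.average(disease[responded], weights=w)

truth = 0.20
print(abs(weighted - truth) < abs(naive - truth))
```

As the abstract notes, the bias-variance trade-off cuts both ways: weighting (or double sampling) reduces bias but inflates the variance of the estimate relative to the naive mean.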
ESO Advanced Data Products for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Retzlaff, J.; Delmotte, N.; Rite, C.; Rosati, P.; Slijkhuis, R.; Vandame, B.
2006-07-01
Advanced Data Products, that is, completely reduced, fully characterized science-ready data sets, play a crucial role for the success of the Virtual Observatory as a whole. We report on on-going work at ESO towards the creation and publication of Advanced Data Products in compliance with present VO standards on resource metadata. The new deep NIR multi-color mosaic of the GOODS/CDF-S region is used to showcase different aspects of the entire process: data reduction employing our MVM-based reduction pipeline, calibration and data characterization procedures, standardization of metadata content, and, finally, a prospect of the scientific potential illustrated by new results on deep galaxy number counts.
Strunk, H M; Henseler, J; Rauch, M; Mücke, M; Kukuk, G; Cuhls, H; Radbruch, L; Zhang, L; Schild, H H; Marinova, M
2016-07-01
Evaluation of ultrasound-guided high-intensity focused ultrasound (HIFU) used for the first time in Germany in patients with inoperable pancreatic cancer for reduction of tumor volume and relief of tumor-associated pain. 15 patients with locally advanced inoperable pancreatic cancer and tumor-related pain symptoms were treated by HIFU (n = 6 UICC stage III, n = 9 UICC stage IV). 13 patients underwent simultaneous standard chemotherapy. Ablation was performed using the JC HIFU system (Chongqing, China HAIFU Company) with an ultrasonic device for real-time imaging. Imaging follow-up (US, CT, MRI) and clinical assessment using validated questionnaires (NRS, BPI) was performed before and up to 15 months after HIFU. Despite biliary or duodenal stents (4/15) and encasement of visceral vessels (15/15), HIFU treatment was performed successfully in all patients. Treatment time and sonication time were 111 min and 1103 s, respectively. The applied total energy was 386 768 J. After HIFU ablation, contrast-enhanced imaging showed devascularization of treated tumor regions with a significant average volume reduction of 63.8 % after 3 months. Considerable pain relief was achieved in 12 patients after HIFU (complete or partial pain reduction in 6 patients). US-guided HIFU with a suitable acoustic pathway can be used for local tumor control and relief of tumor-associated pain in patients with locally advanced pancreatic cancer. • US-guided HIFU allows an additive treatment of unresectable pancreatic cancer.• HIFU can be used for tumor volume reduction.• Using HIFU, a significant reduction of cancer-related pain was achieved.• HIFU provides clinical benefit in patients with pancreatic cancer. Citation Format: • Strunk HM, Henseler J, Rauch M et al. Clinical Use of High-Intensity Focused Ultrasound (HIFU) for Tumor and Pain Reduction in Advanced Pancreatic Cancer. Fortschr Röntgenstr 2016; 188: 662 - 670. © Georg Thieme Verlag KG Stuttgart · New York.
Follow-On Technology Requirement Study for Advanced Subsonic Transport
NASA Technical Reports Server (NTRS)
Wendus, Bruce E.; Stark, Donald F.; Holler, Richard P.; Funkhouser, Merle E.
2003-01-01
A study was conducted to define and assess the critical or enabling technologies required for a year 2005 entry into service (EIS) engine for subsonic commercial aircraft, with NASA Advanced Subsonic Transport goals used as benchmarks. The year 2005 EIS advanced technology engine is an Advanced Ducted Propulsor (ADP) engine. Performance analysis showed that the ADP design offered many advantages compared to a baseline turbofan engine. An airplane/ engine simulation study using a long range quad aircraft quantified the effects of the ADP engine on the economics of typical airline operation. Results of the economic analysis show the ADP propulsion system provides a 6% reduction in direct operating cost plus interest, with half the reduction resulting from reduced fuel consumption. Critical and enabling technologies for the year 2005 EIS ADP were identified and prioritized.
How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?
Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J
2016-04-05
Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
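The core of the indicator-reduction argument is that correlated indicators share variance, which PCA makes explicit. A small self-contained sketch with synthetic "products" driven by two latent factors (the data and dimensions are assumptions, not the ecoinvent set):

```python
import numpy as np

rng = np.random.default_rng(3)
# 200 synthetic products scored on 12 indicators driven by 2 latent factors.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 12))
scores = latent @ loadings + 0.1 * rng.normal(size=(200, 12))

# PCA via SVD of the mean-centered score matrix.
centered = scores - scores.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()   # variance fraction per component

# Two components carry almost all of the variance across 12 indicators,
# mirroring how 4 components covered 92% for the 135 LCIA indicators.
print(explained[:2].sum() > 0.9)  # True
```

Selecting a minimal subset of original indicators that reproduces this covered variance is then a combinatorial search, which is why the study pairs PCA with an optimization algorithm.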
Variance of transionospheric VLF wave power absorption
NASA Astrophysics Data System (ADS)
Tao, X.; Bortnik, J.; Friedrich, M.
2010-07-01
To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method using the standard ionospheric model IRI and in situ observational data. We first verify Helliwell's classic absorption curves using our full wave code. Then we show that the IRI model gives overall smaller wave absorption than Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that are more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet-time wave absorption is smaller than Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.
Associations of gender inequality with child malnutrition and mortality across 96 countries.
Marphatia, A A; Cole, T J; Grijalva-Eternod, C; Wells, J C K
2016-01-01
National efforts to reduce low birth weight (LBW) and child malnutrition and mortality prioritise economic growth. However, this may be ineffective, while rising gross domestic product (GDP) also imposes health costs, such as obesity and non-communicable disease. There is a need to identify other potential routes for improving child health. We investigated associations of the Gender Inequality Index (GII), a national marker of women's disadvantages in reproductive health, empowerment and labour market participation, with the prevalence of LBW, child malnutrition (stunting and wasting) and mortality under 5 years in 96 countries, adjusting for national GDP. The GII displaced GDP as a predictor of LBW, explaining 36% of the variance. Independent of GDP, the GII explained 10% of the variance in wasting and stunting and 41% of the variance in child mortality. Simulations indicated that reducing GII could lead to major reductions in LBW, child malnutrition and mortality in low- and middle-income countries. Independent of national wealth, reducing women's disempowerment relative to men may reduce LBW and promote child nutritional status and survival. Longitudinal studies are now needed to evaluate the impact of efforts to reduce societal gender inequality.
Retrospective analysis of a detector fault for a full field digital mammography system
NASA Astrophysics Data System (ADS)
Marshall, N. W.
2006-11-01
This paper describes objective and subjective image quality measurements acquired as part of a routine quality assurance (QA) programme for an amorphous selenium (a-Se) full field digital mammography (FFDM) system between August 2004 and February 2005. During this period, the FFDM detector developed a fault and was replaced. A retrospective analysis of objective image quality parameters (modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE)) is presented to try to gain a deeper understanding of the detector problem that occurred. These measurements are discussed in conjunction with routine contrast-detail (c-d) results acquired with the CDMAM (Artinis, The Netherlands) test object. There was a significant reduction in MTF over this period of time, indicating an increase in blurring occurring within the a-Se converter layer. This blurring was not isotropic, being greater in the data line direction (left to right across the detector) than in the gate line direction (chest wall to nipple). The initial value of the 50% MTF point was 6 mm^-1; for the faulty detector the 50% MTF points occurred at 3.4 mm^-1 and 1.0 mm^-1 in the gate line and data line directions, respectively. Prior to NNPS estimation, variance images were formed of the detector flat field images. Spatial distribution of variance was not uniform, suggesting that the physical blurring process was not constant across the detector. This change in variance with image position implied that the stationarity of the noise statistics within the image was limited and that care would be needed when performing objective measurements. The NNPS measurements confirmed the results found for the MTF, with a strong reduction in NNPS as a function of spatial frequency. This reduction was far more severe in the data line direction.
A somewhat tentative DQE estimate was made; in the gate line direction there was little change in DQE up to 2.5 mm^-1 but at the Nyquist frequency the DQE had fallen to approximately 35% of the original value. There was severe attenuation of DQE in the data line direction, the DQE falling to less than 0.01 above approximately 3.0 mm^-1. C-d results showed an increase in threshold contrast of approximately 25% for details less than 0.2 mm in diameter, while no reduction in c-d performance was found at the largest detail diameters (1.0 mm and above). Despite the detector fault, the c-d curve was found to pass the European protocol acceptable c-d curve.
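The three objective metrics in this report are linked by the standard relation DQE(f) ∝ MTF(f)² / NNPS(f), which is why extra blurring (a faster-falling MTF) drags the DQE down at high spatial frequencies. A simplified numeric sketch, with entirely made-up MTF shapes and a constant NNPS (in the real fault the NNPS changed too):

```python
import numpy as np

f = np.linspace(0.0, 5.0, 11)            # spatial frequency, mm^-1
mtf_good = np.exp(-(f / 4.0) ** 2)       # healthy detector (illustrative Gaussian MTF)
mtf_bad = np.exp(-(f / 1.5) ** 2)        # faulty detector: extra converter-layer blur

q = 1.0e5                                # photon fluence (assumed, photons/mm^2)
nnps = np.full_like(f, 1.0e-5)           # normalized noise power (assumed constant)

# DQE(f) = MTF(f)^2 / (q * NNPS(f)): signal transfer squared over noise power.
dqe_good = mtf_good ** 2 / (q * nnps)
dqe_bad = mtf_bad ** 2 / (q * nnps)

print(np.all(dqe_bad <= dqe_good))  # True: blurring attenuates DQE at every frequency
```

Both detectors coincide at f = 0 (where MTF = 1), matching the observation that the fault showed up at high frequencies, not in large-detail contrast-detail performance.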
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared through using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site, can reduce the initial MPEV by as much as 90 percent.
The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
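The key property exploited above is that kriging prediction-error variance depends only on sample locations, not measured values, so the MPEV of any proposed design can be computed before sampling. A hedged 1-D toy version of the stopping rule, with an assumed Gaussian covariance model and random "transects" of 10 points per round standing in for real transect geometry:

```python
import numpy as np

def kriging_variance(grid, samples, length=0.1, sill=1.0):
    """Simple-kriging prediction-error variance on a grid, Gaussian covariance."""
    if samples.size == 0:
        return np.full(grid.size, sill)          # no data: prior variance everywhere
    C = sill * np.exp(-((samples[:, None] - samples[None, :]) / length) ** 2)
    C += 1e-6 * np.eye(samples.size)             # jitter for numerical stability
    c = sill * np.exp(-((grid[:, None] - samples[None, :]) / length) ** 2)
    return sill - np.sum(c * np.linalg.solve(C, c.T).T, axis=1)

rng = np.random.default_rng(4)
grid = np.linspace(0.0, 1.0, 200)                # fixed reference grid
samples = np.empty(0)
mpev0 = kriging_variance(grid, samples).mean()   # initial MPEV (= sill)

# Stopping rule: keep adding sampling rounds until MPEV has fallen 90%.
rounds = 0
while kriging_variance(grid, samples).mean() > 0.1 * mpev0:
    rounds += 1
    samples = np.concatenate([samples, rng.uniform(0.0, 1.0, 10)])
```

Because no measurements enter the calculation, the same loop can rank candidate transect layouts (straight versus meandering) purely by their predicted MPEV reduction, which is what the simulated-annealing search optimizes.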
INTRODUCTION OF BIOMASS AS RENEWABLE ENERGY COMPONENT OF FUTURE TRANSPORTATION FUELS
The long-term objectives of new vehicle/fuel systems require the reduction of petroleum use, reduction of air pollution emissions, and reduction of greenhouse gas (GHG) emissions. In the near term, a major advancement toward these objectives will be made possible by the improved ...
Advances in the management of dyslipidemia.
Kampangkaew, June; Pickett, Stephen; Nambi, Vijay
2017-07-01
Cardiovascular disease is the leading cause of morbidity and mortality in the United States and therapies aimed at lipid modification are important for the reduction of cardiovascular risk. There have been many exciting advances in lipid management over the recent years. This review discusses these recent advances as well as the direction of future studies. Several recent clinical trials support low-density lipoprotein cholesterol (LDL-c) reduction beyond maximal statin therapy for improved cardiovascular outcomes. Ezetimibe reduced LDL-c beyond maximal statin therapy and was associated with improved cardiovascular outcomes for high-risk populations. Further LDL-c reduction may also be achieved with proprotein convertase subtilisin/kexin type-9 (PCSK9) inhibition and a recent trial, Further Cardiovascular Outcomes Research with PCSK9 Inhibition in Subjects with Elevated Risk (FOURIER), was the first to show reduction in cardiovascular events for evolocumab. Additional outcome studies of monoclonal antibody and RNA-targeted PCSK9 inhibitors are underway. Quantitative high-density lipoprotein cholesterol (HDL-c) improvements have failed to have clinical impact to date; most recently, cholesteryl ester transfer protein inhibitors and apolipoprotein infusions have demonstrated disappointing results. There are still ongoing trials in both of these areas, but some newer therapies are focusing on HDL functionality and not just the absolute HDL-c levels. There are several ongoing studies in triglyceride reduction including fatty acid therapy, inhibition of apolipoprotein C-3 or ANGTPL3 and peroxisome proliferator-activated receptor-α agonists. Lipid management continues to evolve and these advances have the potential to change clinical practice in the coming years.
Greer, Joseph A.; Pirl, William F.; Park, Elyse R.; Lynch, Thomas J.; Temel, Jennifer S.
2013-01-01
Objective Dose delays and reductions in chemotherapy due to hematologic toxicities are common among patients with advanced non-small-cell lung cancer (NSCLC). However, limited data exist on behavioral or psychological predictors of chemotherapy adherence. The goal of this study was to explore the frequency and clinical predictors of infusion dose delays and reductions in this patient population. Methods Fifty patients newly diagnosed with advanced NSCLC of high performance status (ECOG PS=0-1) completed baseline assessments on quality of life (FACT-L) and mood (HADS) within eight weeks of diagnosis. Participants were followed prospectively for six months. Chemotherapy dosing data came from medical chart review. Results All patients received chemotherapy during the course of the study, beginning with either a platinum-based doublet (74%), an oral epidermal growth factor receptor-tyrosine kinase inhibitor (14%), or a parenteral single agent (12%). Forty percent (N=20) of patients had a dose delay (38%) and/or reduction (16%) in their scheduled infusions. Fisher’s exact tests showed that patients who experienced neutropenia, smoked at the time of diagnosis, or reported heightened baseline anxiety were significantly more likely to experience dose delays or reductions. There were no associations between chemotherapy adherence and patient demographics, performance status, or quality of life. Conclusion In this sample, over one-third of patients with advanced NSCLC experienced either a dose delay or reduction in prescribed chemotherapy regimens. Behavioral and psychological factors, such as tobacco use and anxiety symptoms, appear to play an important role in chemotherapy adherence, though further study is required to confirm these findings. PMID:19027443
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
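One classic pitfall of the kind this study simulates is "leaky" cross-validation: selecting features on the full data set before running leave-one-out, which biases the performance estimate upward even on pure noise. A self-contained Monte Carlo sketch (the nearest-centroid classifier and all sizes are illustrative assumptions, not the paper's exact experiments):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 500
X = rng.normal(size=(n, p))        # pure-noise "features" (no real signal)
y = np.repeat([0, 1], n // 2)      # labels carry no information about X

def top_features(Xtr, ytr, k=10):
    """Pick the k features most correlated with the labels."""
    corr = np.abs(np.corrcoef(Xtr.T, ytr)[-1, :-1])
    return np.argsort(corr)[-k:]

def centroid_predict(Xtr, ytr, xte):
    """Nearest-class-centroid classifier for one test case."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return int(np.linalg.norm(xte - c1) < np.linalg.norm(xte - c0))

def loo_accuracy(select_inside):
    feats_leaky = top_features(X, y)       # pitfall: selection sees all cases
    hits = 0
    for i in range(n):
        tr = np.arange(n) != i
        f = top_features(X[tr], y[tr]) if select_inside else feats_leaky
        hits += centroid_predict(X[tr][:, f], y[tr], X[i, f]) == y[i]
    return hits / n

biased = loo_accuracy(select_inside=False)   # optimistic: well above chance
proper = loo_accuracy(select_inside=True)    # honest: near 50% on noise
print(biased > proper)
```

Since the true discriminability here is exactly chance, any accuracy well above 0.5 from the leaky procedure is pure estimation bias, which is the quantity the paper's Monte Carlo experiments measure for each pitfall.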
Discernible rhythm in the spatio/temporal distributions of transatlantic dust
NASA Astrophysics Data System (ADS)
Ben-Ami, Y.; Koren, I.; Altaratz, O.; Kostinski, A. B.; Lehahn, Y.
2011-08-01
The differences in North African dust emission regions and transport routes between the boreal winter and summer are thoroughly documented. Here we re-examine the spatial and temporal characteristics of dust transport over the tropical and subtropical North Atlantic Ocean, using 10 years of satellite data, in order to better characterize the different dust transport periods and their characteristics. We see a robust annual triplet: a discernible rhythm of "transatlantic dust weather". The proposed annual partition is composed of two heavy-loading periods, associated here with a northern-route period and a southern-route period, and one clean, light-loading period, accompanied by unusually low average optical depth of dust. The two dusty periods are quite different in character: their duration, transport routes, characteristic aerosol loading and frequency of pronounced dust episodes. The southern-route period lasts ~4 months, from the end of November to the end of March. It is characterized by a relatively steady southern positioning, low frequency of dust events, low background values and high variance in dust loading. The northern-route period lasts ~6.5 months, from the end of March to mid October, and is associated with a steady northward drift of ~0.1° latitude day⁻¹, reaching ~1500 km north of the southern route. The northern period is characterized by a higher frequency of dust events, a higher (and variable) background and smaller variance in dust loading. It is less episodic than the southern period. Transitions between the periods are brief. Separation between the southern and northern periods is marked by a northward latitudinal shift in dust transport and by a moderate reduction in the overall dust loading. The second transition, between the northern and southern periods, commences with an abrupt reduction in dust loading (thereby initiating the clean period) and a rapid southward shift of ~0.2° latitude day⁻¹, 1300 km in total.
These rates of northward advance and southern retreat of the dust transport route are in accordance with the simultaneous shift of the Inter Tropical Front. Based on cross-correlation analyses, we attribute the observed rhythm to the contrast between the northwestern and southern Saharan dust source spatial distributions. Despite the vast difference in areas, the Bodélé Depression, located in Chad, appears to modulate transatlantic dust patterns about half the time. The proposed partition captures the essence of transatlantic dust climatology and may, therefore, supply a natural temporal framework for dust analysis via models and observations.
Newman, Erika A; Nuchtern, Jed G
2016-10-01
Neuroblastoma is an embryonic cancer of neural crest cell lineage, accounting for up to 10% of all pediatric cancer. The clinical course is heterogeneous ranging from spontaneous regression in neonates to life-threatening metastatic disease in older children. Much of this clinical variance is thought to result from distinct pathologic characteristics that predict patient outcomes. Consequently, many research efforts have been focused on identifying the underlying biologic and genetic features of neuroblastoma tumors in order to more clearly define prognostic subgroups for treatment stratification. Recent technological advances have placed emphasis on the integration of genetic alterations and predictive biologic variables into targeted treatment approaches to improve patient survival outcomes. This review will focus on these recent advances and the implications they have on the diagnostic, staging, and treatment approaches in modern neuroblastoma clinical management. Copyright © 2016 Elsevier Inc. All rights reserved.
Advanced Rotorcraft Transmission (ART) program
NASA Technical Reports Server (NTRS)
Heath, Gregory F.; Bossler, Robert B., Jr.
1993-01-01
Work performed by the McDonnell Douglas Helicopter Company and Lucas Western, Inc. within the U.S. Army/NASA Advanced Rotorcraft Transmission (ART) Program is summarized. The design of a 5000 horsepower transmission for a next-generation advanced attack helicopter is described. Government goals for the program were to define technology and detail-design the ART to meet, as a minimum, a weight reduction of 25 percent, an internal noise reduction of 10 dB, and a mean-time-between-removal (MTBR) of 5000 hours compared to a state-of-the-art baseline transmission. The split-torque transmission developed using face gears achieved a 40 percent weight reduction, a 9.6 dB noise reduction and a 5270 hour MTBR, meeting or exceeding the above goals. Aircraft mission performance and cost improvements resulting from installation of the ART would include a 17 to 22 percent improvement in loss-exchange ratio during combat, a 22 percent improvement in mean-time-between-failure, a transmission acquisition cost savings of 23 percent, or $165K, per unit, and an average transmission direct operating cost savings of 33 percent, or $24K per flight hour. Face gear tests performed successfully at NASA Lewis are summarized. Also summarized are program results of advanced-material tooth scoring tests, single-tooth bending tests, Charpy impact energy tests, compact tension fracture toughness tests and tensile strength tests.
Alternative Fuels Data Center: Federal Laws and Incentives for Ethanol
advanced vehicles, fuel blends, fuel economy, hybrid vehicles, and idle reduction. Clean Cities provides advanced biofuel, which includes fuels derived from approved renewable biomass, excluding corn starch-based ethanol. Other advanced biofuels may include sugarcane-based fuels, renewable diesel co-processed with
78 FR 5449 - Federal Acquisition Regulation; Submission of OMB Review; Advance Payments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
...; Submission of OMB Review; Advance Payments AGENCIES: Department of Defense (DOD), General Services... Paperwork Reduction Act, the Regulatory Secretariat will be submitting to the Office of Management and... requirement concerning advance payments. A notice was published in the Federal Register at 77 FR 43083, on...
[Advances in microbial genome reduction and modification].
Wang, Jianli; Wang, Xiaoyuan
2013-08-01
Microbial genome reduction and modification are important strategies for constructing cellular chassis used for synthetic biology. This article summarized the essential genes and the methods to identify them in microorganisms, compared various strategies for microbial genome reduction, and analyzed the characteristics of some microorganisms with the minimized genome. This review shows the important role of genome reduction in constructing cellular chassis.
Usui, Takuji; Butchart, Stuart H M; Phillimore, Albert B
2017-03-01
There are wide reports of advances in the timing of spring migration of birds over time and in relation to rising temperatures, though phenological responses vary substantially within and among species. An understanding of the ecological, life-history and geographic variables that predict this intra- and interspecific variation can guide our projections of how populations and species are likely to respond to future climate change. Here, we conduct phylogenetic meta-analyses addressing slope estimates of the timing of avian spring migration regressed on (i) year and (ii) temperature, representing a total of 413 species across five continents. We take into account slope estimation error and examine phylogenetic, ecological and geographic predictors of intra- and interspecific variation. We confirm earlier findings that on average birds have significantly advanced their spring migration time by 2.1 days per decade and 1.2 days °C⁻¹. We find that over time and in response to warmer spring conditions, short-distance migrants have advanced spring migratory phenology by more than long-distance migrants. We also find that larger-bodied species show greater advance over time compared to smaller-bodied species. Our results did not reveal any evidence that interspecific variation in migration response is predictable on the basis of species' habitat or diet. We detected a substantial phylogenetic signal in migration time in response to both year and temperature, suggesting that some of the shifts in migratory phenological response to climate are predictable on the basis of phylogeny. However, we estimate high levels of species and spatial variance relative to phylogenetic variance, which is consistent with plasticity in response to climate evolving fairly rapidly and being more influenced by adaptation to current local climate than by common descent. On average, avian spring migration times have advanced over time and as spring has become warmer.
While we are able to identify predictors that explain some of the true among-species variation in response, substantial intra- and interspecific variation in migratory response remains to be explained. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
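The core of accounting for slope estimation error in such a meta-analysis is inverse-variance weighting. A minimal fixed-effect sketch is below; the published analysis fits far richer phylogenetic mixed models, and every number here is a simulated illustration, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def fixed_effect_mean(slopes, se):
    """Inverse-variance weighted (fixed-effect) pooled slope: each
    estimate is weighted by 1/SE^2, so noisy slopes count for less."""
    w = 1.0 / np.asarray(se) ** 2
    pooled = np.sum(w * slopes) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Hypothetical per-species slopes (days of advance per decade) observed
# with differing estimation error; the true common slope is -2.1.
true_slope = -2.1
se = rng.uniform(0.3, 3.0, size=200)
slopes = true_slope + rng.normal(0.0, 1.0, 200) * se

pooled, pooled_se = fixed_effect_mean(slopes, se)
print(pooled, pooled_se)  # pooled estimate near -2.1, with small SE
```

The pooled standard error shrinks below that of any single species' slope, which is why combining hundreds of noisy estimates can still resolve an average advance of a couple of days per decade.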
32 CFR 165.7 - Waivers (including reductions).
Code of Federal Regulations, 2010 CFR
2010-07-01
... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...
32 CFR 165.7 - Waivers (including reductions).
Code of Federal Regulations, 2011 CFR
2011-07-01
... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...
32 CFR 165.7 - Waivers (including reductions).
Code of Federal Regulations, 2012 CFR
2012-07-01
... RECOUPMENT OF NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.7 Waivers (including reductions). (a) The “Arms... of nonrecurring cost of major defense equipment from foreign military sales customers but authorizes consideration of reductions or waivers for particular sales which, if made, significantly advance U.S...
Predicting Persuasion-Induced Behavior Change from the Brain
Falk, Emily B.; Berkman, Elliot T.; Mann, Traci; Harrison, Brittany; Lieberman, Matthew D.
2011-01-01
Although persuasive messages often alter people’s self-reported attitudes and intentions to perform behaviors, these self-reports do not necessarily predict behavior change. We demonstrate that neural responses to persuasive messages can predict variability in behavior change in the subsequent week. Specifically, an a priori region of interest (ROI) in medial prefrontal cortex (MPFC) was reliably associated with behavior change (r = 0.49, p < 0.05). Additionally, an iterative cross-validation approach using activity in this MPFC ROI predicted an average 23% of the variance in behavior change beyond the variance predicted by self-reported attitudes and intentions. Thus, neural signals can predict behavioral changes that are not predicted from self-reported attitudes and intentions alone. Additionally, this is the first functional magnetic resonance imaging study to demonstrate that a neural signal can predict complex real world behavior days in advance. PMID:20573889
Prediction of Cutting Force in Turning Process-an Experimental Approach
NASA Astrophysics Data System (ADS)
Thangarasu, S. K.; Shankar, S.; Thomas, A. Tony; Sridhar, G.
2018-02-01
This paper deals with the prediction of cutting forces in a turning process. The turning process with an advanced cutting tool has several advantages over grinding, such as short cycle time, process flexibility, comparable surface roughness, high material removal rate, and fewer environmental problems without the use of cutting fluid. In this study, a full-bridge dynamometer was used to measure the cutting forces on a mild steel workpiece with a cemented carbide insert tool for different combinations of cutting speed, feed rate, and depth of cut. The experiments were planned based on a Taguchi design, and the measured cutting forces were compared with the predicted forces in order to validate the feasibility of the proposed design. The percentage contribution of each process parameter was analyzed using analysis of variance (ANOVA). Both the experimental results from the lathe tool dynamometer and those from the designed full-bridge dynamometer were analyzed using the Taguchi design of experiments and ANOVA.
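The ANOVA percentage-contribution calculation used in such studies can be sketched for a balanced full factorial. The factor levels, force model and noise below are hypothetical illustrations, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 3-factor, 3-level full factorial for cutting force:
# factors are coded levels of speed, feed and depth of cut.
levels = [0, 1, 2]
runs = [(s, f, d) for s in levels for f in levels for d in levels]
# Illustrative additive model: depth dominates, feed matters, speed is weak.
force = np.array([200 + 5 * s + 40 * f + 90 * d + rng.normal(0, 5)
                  for s, f, d in runs])

def percent_contribution(force, runs):
    """Classic ANOVA decomposition for a balanced factorial: each
    factor's between-level sum of squares as a share of the total."""
    runs = np.asarray(runs)
    grand = force.mean()
    ss_total = np.sum((force - grand) ** 2)
    shares = []
    for j in range(runs.shape[1]):
        ss = 0.0
        for lev in np.unique(runs[:, j]):
            sel = runs[:, j] == lev
            ss += sel.sum() * (force[sel].mean() - grand) ** 2
        shares.append(100.0 * ss / ss_total)
    return shares

speed_pc, feed_pc, depth_pc = percent_contribution(force, runs)
print(speed_pc, feed_pc, depth_pc)  # depth's share is by far the largest
```

With the simulated coefficients above, depth of cut accounts for most of the variance, matching the intuition that percent contribution ranks process parameters by how much of the observed force variation each explains.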
Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.
2009-02-01
A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results for identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC=0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.
Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries
NASA Astrophysics Data System (ADS)
De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.
This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by a reductive-acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were realised according to a full factorial design, then simple regression equations for Mn, Zn and Fe extraction were determined from the experimental data as a function of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by the analysis of variance (ANOVA). This analysis evidenced the best operating conditions of the reductive acid leaching: 70% of manganese and 100% of zinc were extracted after 5 h, at 80 °C with 20% of pulp density, 1.8 M sulphuric acid concentration and 59.4 g L -1 of oxalic acid. Both manganese and zinc extraction yields higher than 96% were obtained by using two sequential leaching steps.
Clark, Larkin; Wells, Martha H; Harris, Edward F; Lou, Jennifer
2016-01-01
To determine if aggressiveness of primary tooth preparation varied among different brands of zirconia and stainless steel crowns (SSCs). One hundred primary typodont teeth were divided into five groups (10 posterior and 10 anterior) and assigned to: Cheng Crowns (CC); EZ Pedo (EZP); Kinder Krowns (KKZ); NuSmile (NSZ); and SSC. Teeth were prepared, and assigned crowns were fitted. Teeth were weighed prior to and after preparation. Weight changes served as a surrogate measure of tooth reduction. Analysis of variance showed a significant difference in tooth reduction among brands/types for both anterior and posterior teeth. Tukey's honest significant difference (HSD) test, when applied to the anterior data, revealed that SSCs required significantly less tooth removal than the composite of the four zirconia brands, which showed no significant difference among them. Tukey's HSD test, applied to the posterior data, revealed that CC required significantly greater removal of crown structure, while EZP, KKZ, and NSZ were statistically equivalent, and SSCs required significantly less removal. Zirconia crowns required more tooth reduction than stainless steel crowns for primary anterior and posterior teeth. Tooth reduction for anterior zirconia crowns was equivalent among brands. For posterior teeth, reduction for three brands (EZ Pedo, Kinder Krowns, NuSmile) did not differ, while Cheng Crowns required more reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-07-01
This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NO{sub x} emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NO{sub x} burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NO{sub x} burners, advanced overfire systems, and digital control system.
The aluminum smelting process.
Kvande, Halvor
2014-05-01
This introduction to the industrial primary aluminum production process presents a short description of the electrolytic reduction technology, the history of aluminum, and the importance of this metal and its production process to modern society. Aluminum's special qualities have enabled advances in technologies coupled with energy and cost savings. Aircraft capabilities have been greatly enhanced, and increases in size and capacity are made possible by advances in aluminum technology. The metal's flexibility for shaping and extruding has led to architectural advances in energy-saving building construction. The high strength-to-weight ratio has meant a substantial reduction in energy consumption for trucks and other vehicles. The aluminum industry is therefore a pivotal one for ecological sustainability and strategic for technological development.
Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction
NASA Astrophysics Data System (ADS)
Scarnati, Theresa; Gelb, Anne
2018-04-01
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast of the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method to form SAR images from the multiple observations. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as images from the MSTAR T-72 database.
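To make the speckle statistics concrete, here is a brief simulation. This is not the variance-based joint sparsity method itself, only the standard multilook-averaging baseline such methods improve upon; the scene and look count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fully developed speckle in a SAR intensity image is exponentially
# distributed per pixel (standard deviation equals the mean), which is
# why single-look images show the granular bright/dark pattern.
scene = np.full((128, 128), 10.0)                 # constant true reflectivity
looks = [scene * rng.exponential(1.0, scene.shape) for _ in range(8)]

single = looks[0]
multilook = np.mean(looks, axis=0)                # incoherent averaging

# Averaging L independent looks divides the speckle variance by L,
# so the std ratio should be close to sqrt(8) ~ 2.8.
print(np.std(single) / np.std(multilook))
```

Multilook averaging buys variance reduction at the cost of resolution; the appeal of joint-sparsity-style reconstruction is reducing speckle without that trade-off.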
Individual and population-level responses to ocean acidification.
Harvey, Ben P; McKeown, Niall J; Rastrick, Samuel P S; Bertolini, Camilla; Foggo, Andy; Graham, Helen; Hall-Spencer, Jason M; Milazzo, Marco; Shaw, Paul W; Small, Daniel P; Moore, Pippa J
2016-01-29
Ocean acidification is predicted to have detrimental effects on many marine organisms and ecological processes. Despite growing evidence for direct impacts on specific species, few studies have simultaneously considered the effects of ocean acidification on individuals (e.g. consequences for energy budgets and resource partitioning) and population level demographic processes. Here we show that ocean acidification increases energetic demands on gastropods resulting in altered energy allocation, i.e. reduced shell size but increased body mass. When scaled up to the population level, long-term exposure to ocean acidification altered population demography, with evidence of a reduction in the proportion of females in the population and genetic signatures of increased variance in reproductive success among individuals. Such increased variance enhances levels of short-term genetic drift which is predicted to inhibit adaptation. Our study indicates that even against a background of high gene flow, ocean acidification is driving individual- and population-level changes that will impact eco-evolutionary trajectories.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.
2016-06-15
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image, and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms.
In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
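A highly simplified sketch of the dose-scaling idea follows, assuming a purely quantum-limited (Poisson) detector with no offset, gain map, or anisotropic noise (all of which the actual method handles via measured flat-field noise masks). The Anscombe transformation is used here only to check that the simulated half-dose image has the correct signal-dependent variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    """Anscombe transformation: approximately stabilizes the variance
    of Poisson data to 1, independent of the signal level."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def simulate_dose_reduction(image, fraction):
    """Scale a full-dose image by the dose fraction and add zero-mean
    signal-dependent noise so the result has the mean and variance of a
    Poisson acquisition at the lower dose. (Toy model: real detectors
    need the offset, gain and DQE corrections described above.)"""
    scaled = fraction * image
    # Scaling alone contributes fraction^2 * Var(pixel); the noise mask
    # must supply the missing fraction * (1 - fraction) * signal.
    extra_sd = np.sqrt(fraction * (1.0 - fraction) * np.maximum(image, 0))
    return scaled + rng.normal(0.0, 1.0, image.shape) * extra_sd

# Flat-field "full dose" acquisition: Poisson counts around 400.
full = rng.poisson(400.0, size=(256, 256)).astype(float)
half = simulate_dose_reduction(full, 0.5)
real_half = rng.poisson(200.0, size=(256, 256)).astype(float)

# In the Anscombe domain both half-dose images should have variance ~1.
print(np.var(anscombe(real_half)))
print(np.var(anscombe(half)))
```

Matching variances in the Anscombe domain is the toy-model analogue of the NNPS and local-variance agreement reported in the abstract.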
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there are probably steric and cooperative influences on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also toward saturation. The importance of limiting the range of reported automated assay results to the portion of the standard curve that delivers optimal sensitivity is stressed.
Published methods for automated data reduction of Scatchard plots for radioreceptor assays are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated, as with manual, data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; and (4) providing ongoing quality-control charts and between-assay variance.
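The recommended curvilinear fit, a third-order polynomial in the square root of concentration, can be sketched as follows. The standard-curve values are invented for illustration, and the numeric inversion by bisection is one simple way to read unknowns off the fitted curve.

```python
import numpy as np

# Hypothetical standard curve for a competitive radioassay: percent of
# label bound falls as unlabeled ligand concentration rises.
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])    # e.g. ng/mL standards
bound = np.array([92, 85, 74, 58, 45, 33, 19, 11])  # % bound, illustrative

# Fit a third-order polynomial in sqrt(concentration), as the text
# recommends, then invert it numerically to read off unknowns.
coeffs = np.polyfit(np.sqrt(conc), bound, deg=3)

def concentration_from_bound(b, lo=0.5, hi=100.0):
    """Invert the fitted curve on the calibrated range by bisection
    (the fitted curve is monotone decreasing there)."""
    f = lambda c: np.polyval(coeffs, np.sqrt(c)) - b
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # still above the target binding: move right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Reading a standard back through the fit should recover its concentration.
est = concentration_from_bound(np.polyval(coeffs, np.sqrt(10.0)))
print(est)
```

As the abstract cautions, reported results should be restricted to the calibrated, high-sensitivity portion of such a curve; extrapolating a cubic beyond the standards is unreliable.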
NASA Technical Reports Server (NTRS)
Kirsch, Paul J.; Hayes, Jane; Zelinski, Lillian
2000-01-01
This special case study report presents the Science and Engineering Technical Assessments (SETA) team's findings for exploring the correlation between the underlying models of Advanced Risk Reduction Tool (ARRT) relative to how it identifies, estimates, and integrates Independent Verification & Validation (IV&V) activities. The special case study was conducted under the provisions of SETA Contract Task Order (CTO) 15 and the approved technical approach documented in the CTO-15 Modification #1 Task Project Plan.
Advance Noise Control Fan II: Test Rig Fan Risk Management Study
NASA Technical Reports Server (NTRS)
Lucero, John
2013-01-01
Since 1995, the Advanced Noise Control Fan (ANCF) has contributed significantly to the understanding of the physics of fan tonal noise generation. The 9- by 15-foot wind tunnel has successfully tested multiple high-speed fan designs over the last several decades. This work advanced several tone noise reduction concepts to higher technology readiness levels (TRLs) and supported the validation of fan tone noise prediction codes.
NASA's Space Launch System Advanced Booster Engineering Demonstration and/or Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; Dumbacher, Daniel L.; May, Todd A.
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters (SRBs) for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, with a stated intent to reduce risks leading to an affordable advanced booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the advanced boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the advanced boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit (BEO), opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable advanced booster that meets the SLS performance requirements.
Demonstrations and/or risk reduction efforts were required to be related to a proposed booster concept directly applicable to fielding an advanced booster. This paper will discuss, for the first time publicly, the contract awards and how NASA intends to use the data from these efforts to prepare for the planned advanced booster DDT&E acquisition as the SLS Program moves forward with competitively procured affordable performance enhancements.
NASA's Space Launch System Advanced Booster Engineering Demonstration and Risk Reduction Efforts
NASA Technical Reports Server (NTRS)
Crumbly, Christopher M.; May, Todd; Dumbacher, Daniel
2012-01-01
The National Aeronautics and Space Administration (NASA) formally initiated the Space Launch System (SLS) development in September 2011, with the approval of the program's acquisition plan, which engages the current workforce and infrastructure to deliver an initial 70 metric ton (t) SLS capability in 2017, while using planned block upgrades to evolve to a full 130 t capability after 2021. A key component of the acquisition plan is a three-phased approach for the first stage boosters. The first phase is to complete the development of the Ares and Space Shuttle heritage 5-segment solid rocket boosters for initial exploration missions in 2017 and 2021. The second phase in the booster acquisition plan is the Advanced Booster Risk Reduction and/or Engineering Demonstration NASA Research Announcement (NRA), which was recently awarded after a full and open competition. The NRA was released to industry on February 9, 2012, and its stated intent was to reduce risks leading to an affordable Advanced Booster and to enable competition. The third and final phase will be a full and open competition for Design, Development, Test, and Evaluation (DDT&E) of the Advanced Boosters. There are no existing boosters that can meet the performance requirements for the 130 t class SLS. The expected thrust class of the Advanced Boosters is potentially double the current 5-segment solid rocket booster capability. These new boosters will enable the flexible path approach to space exploration beyond Earth orbit, opening up vast opportunities including near-Earth asteroids, Lagrange Points, and Mars. This evolved capability offers large volume for science missions and payloads, will be modular and flexible, and will be right-sized for mission requirements. NASA developed the Advanced Booster Engineering Demonstration and/or Risk Reduction NRA to seek industry participation in reducing risks leading to an affordable Advanced Booster that meets the SLS performance requirements.
Demonstrations and/or risk reduction efforts were required to be related to a proposed booster concept directly applicable to fielding an Advanced Booster. This paper will discuss, for the first time publicly, the contract awards and how NASA intends to use the data from these efforts to prepare for the planned Advanced Booster DDT&E acquisition as the SLS Program moves forward with competitively procured affordable performance enhancements.
Hybrid Wing-Body (HWB) Pressurized Fuselage Modeling, Analysis, and Design for Weight Reduction
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2012-01-01
This paper describes the interim progress for an in-house study that is directed toward innovative structural analysis and design of next-generation advanced aircraft concepts, such as the Hybrid Wing-Body (HWB) and the Advanced Mobility Concept-X flight vehicles, for structural weight reduction and associated performance enhancement. Unlike the conventional, skin-stringer-frame construction for a cylindrical fuselage, the box-type pressurized fuselage panels in the HWB undergo significant deformation of the outer aerodynamic surfaces, which must be minimized without significant structural weight penalty. Simple beam and orthotropic plate theory is first considered for sizing, analytical verification, and possible equivalent-plate analysis with appropriate simplification. By designing advanced composite stiffened-shell configurations, significant weight reduction may be possible compared with the sandwich and ribbed-shell structural concepts that have been studied previously. The study involves independent analysis of the advanced composite structural concepts that are presently being developed by The Boeing Company for pressurized HWB flight vehicles. High-fidelity parametric finite-element models of test coupons, panels, and multibay fuselage sections were developed for conducting design studies and identifying critical areas of potential failure. Interim results are discussed to assess the overall weight/strength advantages.
Moran, John L; Solomon, Patricia J
2011-02-01
Time series analysis has seen limited application in the biomedical literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical), were generated from the Australia and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, series structural time changes, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, a declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in winter months (July-August); the medical series featured annual cycling, whereas the surgical series demonstrated long and short (3-4 months) cycling. Series structural breaks were apparent in January 1995 and December 2002. The covariance stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual Autoregressive Conditional Heteroscedasticity effects entailed a Generalized Autoregressive Conditional Heteroscedasticity model, preferred by information criterion and mean model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}.
A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.
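The GARCH mechanism invoked above — conditional variances updated adaptively from past shocks — can be sketched in a few lines. This is a generic GARCH(1,1) variance recursion, not the authors' fitted mortality model; the parameter names and starting value are illustrative assumptions:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta),
    # which requires alpha + beta < 1 (covariance stationarity).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

In practice omega, alpha, and beta would be estimated by maximum likelihood rather than supplied by hand; the point here is only how the predicted variance adapts to each new residual.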
Control of Jet Noise Through Mixing Enhancement
NASA Technical Reports Server (NTRS)
Bridges, James; Wernet, Mark; Brown, Cliff
2003-01-01
The idea of using mixing enhancement to reduce jet noise is not new. Lobed mixers have been around since shortly after jet noise became a problem. However, these designs were often post-design fixes that were rarely worth their weight and thrust losses from a system perspective. Recent advances in CFD and some inspired concepts involving chevrons have shown how mixing enhancement can be successfully employed in noise reduction by subtle manipulation of the nozzle geometry. At NASA Glenn Research Center, this recent success has provided an opportunity to explore our paradigms of jet noise understanding, prediction, and reduction. Recent advances in turbulence measurement technology for hot jets have also greatly aided our ability to explore the cause and effect relationships of nozzle geometry, plume turbulence, and acoustic far field. By studying the flow and sound fields of jets with various degrees of mixing enhancement and subsequent noise manipulation, we are able to explore our intuition regarding how jets make noise, test our prediction codes, and pursue advanced noise reduction concepts. The paper will cover some of the existing paradigms of jet noise as they relate to mixing enhancement for jet noise reduction, and present experimental and analytical observations that support these paradigms.
Sikorsky Aircraft Advanced Rotorcraft Transmission (ART) program
NASA Technical Reports Server (NTRS)
Kish, Jules G.
1993-01-01
The objectives of the Advanced Rotorcraft Transmission program were to achieve a 25 percent weight reduction, a 10 dB noise reduction, and a 5,000 hour mean time between removals (MTBR). A three engine Army Cargo Aircraft (ACA) of 85,000 pounds gross weight was used as the baseline. Preliminary designs were conducted of split path and split torque transmissions to evaluate weight, reliability, and noise. A split path gearbox was determined to be 23 percent lighter, greater than 10 dB quieter, and almost four times more reliable than the baseline two stage planetary design. Detail design studies were conducted of the chosen split path configuration, and drawings were produced of a 1/2 size gearbox consisting of a single engine path of the split path section. Fabrication and testing were then conducted on the 1/2 size gearbox. The 1/2 size gearbox testing proved that the concept of the split path gearbox with a high reduction ratio double helical output gear was sound. The improvements were attributed to extensive use of composites, spring clutches, advanced high hot hardness gear steels, the split path configuration itself, high reduction ratio double helical gearing on the output stage, elastomeric load sharing devices, and elimination of accessory drives.
Dual RAAS blockade is desirable in kidney disease: con.
Bakris, George L
2010-09-01
Dual renin-angiotensin-aldosterone system (RAAS) blockade is associated with a higher risk of hyperkalemia and has not been shown, in any outcome trial of validated renal end points (that is, doubling of creatinine, time to dialysis, or death), to be superior to other approaches. It shows promise in advanced proteinuric nephropathy for additional proteinuria reduction. Whether this additional proteinuria reduction translates into meaningful outcomes of chronic kidney disease (CKD) is unknown, as proteinuria change is not a validated surrogate end point. Until we know the answer to this question, only those with very high levels of proteinuria should receive combination RAAS blocking therapy, and they need to be carefully monitored. Such individuals should be evaluated for risk of hyperkalemia and should consider use of a non-dihydropyridine calcium antagonist added to the single RAAS agent as an alternative for proteinuria reduction. This provides a safe and effective option for those patients with advanced nephropathic disease who need additional proteinuria reduction. In all cases other than advanced proteinuric nephropathy, there is no evidence of any positive CKD outcome with dual RAAS blockade. Thus, dual RAAS blockade cannot be recommended for all CKD patients.
NASA Technical Reports Server (NTRS)
Knip, G.; Plencner, R. M.; Eisenberg, J. D.
1980-01-01
The effects of engine configuration, advanced component technology, compressor pressure ratio and turbine rotor-inlet temperature on such figures of merit as vehicle gross weight, mission fuel, aircraft acquisition cost, operating cost, and life cycle cost are determined for three fixed- and two rotary-wing aircraft. Compared with a current production turboprop, an advanced technology (1988) engine results in a 23 percent decrease in specific fuel consumption. Depending on the figure of merit and the mission, turbine engine cost reductions required to achieve aircraft cost parity with a current spark ignition reciprocating (SIR) engine vary from 0 to 60 percent and from 6 to 74 percent with a hypothetical advanced SIR engine. Compared with a hypothetical turboshaft using currently available technology (1978), an advanced technology (1988) engine installed in a light twin-engine helicopter results in a 16 percent reduction in mission fuel and about 11 percent in most of the other figures of merit.
Noise impact of advanced high lift systems
NASA Technical Reports Server (NTRS)
Elmer, Kevin R.; Joshi, Mahendra C.
1995-01-01
The impact of advanced high lift systems on aircraft size, performance, direct operating cost and noise were evaluated for short-to-medium and medium-to-long range aircraft with high bypass ratio and very high bypass ratio engines. The benefit of advanced high lift systems in reducing noise was found to be less than 1 effective-perceived-noise decibel level (EPNdB) when the aircraft were sized to minimize takeoff gross weight. These aircraft did, however, have smaller wings and lower engine thrusts for the same mission than aircraft with conventional high lift systems. When the advanced high lift system was implemented without reducing wing size and simultaneously using lower flap angles that provide higher L/D at approach, a cumulative noise reduction of as much as 4 EPNdB was obtained. Comparison of aircraft configurations that have similar approach speeds showed a cumulative noise reduction of 2.6 EPNdB that is purely the result of incorporating the advanced high lift system in the aircraft design.
Advanced Noise Control Fan (ANCF)
2014-01-15
The Advanced Noise Control Fan shown here is located in NASA Glenn’s Aero-Acoustic Propulsion Laboratory. The 4-foot diameter fan is used to evaluate innovative aircraft engine noise reduction concepts less expensively and more quickly.
Olson, K; Rogers, W T; Cui, Y; Cree, M; Baracos, V; Rust, T; Mellott, I; Johnson, L; Macmillan, K; Bonville, N
2011-08-01
We have proposed that declines in adaptive capacity, defined as the ability to adapt to multiple stressors, may serve as an indicator of risk for fatigue. A comprehensive measure of adaptive capacity does not exist. In this paper we describe construction of an instrument to measure adaptive capacity, the Adaptive Capacity Index (ACI). Descriptive and psychometric. Six sites providing palliative care in Western Canada. ≥18 years old, diagnosed with advanced cancer, able to read and write English, Mini-Mental Status Exam score ≥22. Pilot study n=48; Main study n=225 stratified using the Edmonton Symptom Assessment Scale (ESAS) tiredness score (≥0 to ≤2 n=60; ≥3 to ≤6 n=108; ≥7 and ≤10 n=57). Following ethics approval, 17 experts in symptom management assisted with content validation and consenting individuals completed the Functional Assessment of Cancer Therapy-Fatigue (FACT-F), the Profile of Mood States-Vigor short form (POMS-Vsf), and the ACI. A research assistant collected demographic information and assigned an Eastern Cooperative Oncology Group (ECOG) score. Data were analyzed using descriptive and inferential statistics (i.e., exploratory factor analyses, correlation, multivariate analyses of variance, and multiple regression). Five 6-item ACI factors/subscales (Cognitive Function, Stamina/Muscle Endurance, Sleep Quality, Emotional Reactivity, and Social Interaction) were identified. The ACI-total scale and its subscales were internally consistent (Cronbach's alpha 0.76-0.89), and were significantly correlated with each other, and with each fatigue measure (Pearson's r ranging from -0.724 to 0.634). The ACI total score was sensitive to changes in the ESAS tiredness score. Stamina/Muscle Endurance, Cognitive Function, and Sleep Quality predicted 60.8% of the variance in FACT-F. Stamina/Muscle Endurance and Social Interaction predicted 36.8% of the variance in POMS-Vsf. Stamina/Muscle Endurance and Sleep Quality predicted 8% of the variance in ECOG. 
The ACI is reliable and shows preliminary evidence of validity. In future studies we will examine relationships between ACI subscale scores and subsequent increases in fatigue and explore linkages to physiological processes. We will also establish ACI norms for early and late stage cancers and explore variations in ACI subscale scores based on age or gender. Copyright © 2011 Elsevier Ltd. All rights reserved.
User Interface for the ESO Advanced Data Products Image Reduction Pipeline
NASA Astrophysics Data System (ADS)
Rité, C.; Delmotte, N.; Retzlaff, J.; Rosati, P.; Slijkhuis, R.; Vandame, B.
2006-07-01
The poster presents a friendly user interface for image reduction, written entirely in Python and developed by the Advanced Data Products (ADP) group. The interface is a front-end to the ESO/MVM image reduction package, originally developed in the ESO Imaging Survey (EIS) project and currently used to reduce imaging data from several instruments such as WFI, ISAAC, SOFI and FORS1. As part of its scope, the interface produces high-level, VO-compliant science images from raw data, providing the astronomer with a complete monitoring system during the reduction and also computing statistical image properties for data quality assessment. The interface is meant to be used for VO services; it is free but unmaintained software, and the intention of the authors is to share code and experience. The poster describes the interface architecture and current capabilities and gives a description of the ESO/MVM engine for image reduction. The ESO/MVM engine should be released by the end of this year.
Recent Progress in Engine Noise Reduction Technologies
NASA Technical Reports Server (NTRS)
Huff, Dennis; Gliebe, Philip
2003-01-01
Highlights from NASA-funded research over the past ten years for aircraft engine noise reduction are presented showing overall technical plans, accomplishments, and selected applications to turbofan engines. The work was sponsored by NASA's Advanced Subsonic Technology (AST) Noise Reduction Program. Emphasis is given to only the engine noise reduction research and significant accomplishments that were investigated at Technology Readiness Levels ranging from 4 to 6. The Engine Noise Reduction sub-element was divided into four work areas: source noise prediction, model scale tests, engine validation, and active noise control. Highlights from each area include technologies for higher bypass ratio turbofans, scarf inlets, forward-swept fans, swept and leaned stators, chevron/tabbed nozzles, advanced noise prediction analyses, and active noise control for fans. Finally, an industry perspective is given from General Electric Aircraft Engines showing how these technologies are being applied to commercial products. This publication contains only presentation vu-graphs from an invited lecture given at the 41st AIAA Aerospace Sciences Meeting, January 6-9, 2003.
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-01-01
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818
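As a toy illustration of the Lagrange-multiplier style allocation that such attenuator control solves, assume each ray i has variance exp(p_i)/n_i (p_i the attenuation line integral, n_i the incident fluence) and that dose is proportional to total fluence; minimizing mean variance at fixed dose then gives n_i proportional to exp(p_i/2). This simplified model is an assumption for illustration, not the paper's exact formulation:

```python
import math

def fluence_allocation(p, total):
    """Minimise sum_i exp(p_i)/n_i subject to sum_i n_i = total.

    Setting the derivative of the Lagrangian to zero gives
    n_i proportional to exp(p_i / 2): thicker paths receive more fluence.
    p is a list of attenuation line integrals; total is the fluence budget.
    """
    w = [math.exp(pi / 2.0) for pi in p]
    s = sum(w)
    return [total * wi / s for wi in w]
```

With equal attenuation on every ray the budget is split evenly; a ray through thicker anatomy receives a larger share, which is the qualitative behavior a dynamic attenuator approximates with its limited degrees of freedom.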
Production of oxygen from lunar ilmenite
NASA Technical Reports Server (NTRS)
Zhao, Y.; Shadman, F.
1990-01-01
The following subjects are addressed: (1) the mechanism and kinetics of carbothermal reduction of simulated lunar ilmenite using carbon and, particularly, CO as reducing agents; (2) the determination of the rate-limiting steps; (3) the investigation of the effect of impurities, particularly magnesium; (4) the search for catalysts suitable for enhancement of the rate-limiting step; (5) the comparison of the kinetics of carbothermal reduction with those of hydrogen reduction; (6) the study of the combined use of CO and hydrogen as products of gasification of carbonaceous solids; (7) the development of reduction methods based on the use of waste carbonaceous compounds for the process; (8) the development of a carbothermal reaction path that utilizes gasification of carbonaceous solids to reducing gaseous species (hydrocarbons and carbon monoxide) to facilitate the reduction reaction kinetics and make the process more flexible in using various forms of carbonaceous feeds; (9) the development of advanced gas separation techniques, including the use of high-temperature ceramic membranes; (10) the development of an optimum process flow sheet for carbothermal reduction, and comparison of this process with the hydrogen reduction scheme, as well as a general comparison with other leading oxygen production schemes; and (11) the use of new and advanced material processing and separation techniques.
Comparison of Variance-to-Mean Ratio Methods for Reparables Inventory Management
2006-03-01
for Recoverable Items in the ALS [Advanced Logistics System] Marginal Analysis Algorithms”. Marginal analysis is a microeconomics technique used...in the Demands Workbook . The quantitative expected backorder and aircraft availability percentage result. Each of the 30 simulations is run five...10A, B-2A, C-17A and F-15E aircraft. The data was selected from D200A’s Ddb04 tables and flying hour programs respectively. The two workbook (OIM
NASA Astrophysics Data System (ADS)
Gamm, Ute A.; Huang, Brendan K.; Mis, Emily K.; Khokha, Mustafa K.; Choma, Michael A.
2017-04-01
Mucociliary flow is an important defense mechanism in the lung to remove inhaled pathogens and pollutants. A disruption of ciliary flow can lead to respiratory infections. Even though patients in the intensive care unit (ICU) either have or are very susceptible to respiratory infections, mucociliary flow is not well understood in the ICU setting. We recently demonstrated that hyperoxia, a consequence of administering supplemental oxygen to a patient in respiratory failure, can lead to a significant reduction of cilia-driven fluid flow in mouse trachea. There are other factors that are relevant to ICU medicine that can damage the ciliated tracheal epithelium, including inhalation injury and endotracheal tube placement. In this study we use two animal models, Xenopus embryo and ex vivo mouse trachea, to analyze flow defects in the injured ciliated epithelium. Injury is generated either mechanically with a scalpel or chemically by calcium chloride (CaCl2) shock, which efficiently but reversibly deciliates the embryo skin. We used optical coherence tomography (OCT) and particle tracking velocimetry (PTV) to quantify cilia-driven fluid flow over the surface of the Xenopus embryo. We additionally visualized damage to the ciliated epithelium by capturing 3D speckle variance images that highlight beating cilia. Mechanical injury disrupted cilia-driven fluid flow over the injured site, which led to a reduction in cilia-driven fluid flow over the whole surface of the embryo (n=7). The calcium chloride shock protocol proved to be highly effective in deciliating embryos (n=6). 3D speckle variance images visualized a loss of cilia, and cilia-driven flow was halted immediately after application. We also applied CaCl2 shock to cultured ex vivo mouse trachea (n=8) and found, similarly to the effects in Xenopus embryo, an extensive loss of cilia with resulting cessation of flow.
We investigated the regeneration of the ciliated epithelium after an 8 day incubation period, and found that cilia had regrown and flow was completely restored. In conclusion, OCT is a valuable tool to visualize injury of the ciliated epithelium and to quantify reduction of generated flow. This method allows for systematic investigation of focal and diffuse injury of the ciliated epithelium and the assessment of mechanisms to compensate for loss of flow.
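Speckle-variance imaging of the kind used above to highlight beating cilia reduces, at its core, to a per-pixel variance computed across repeated frames: moving scatterers (beating cilia) decorrelate between acquisitions and show high variance, while static tissue shows low variance. A minimal sketch, with frame registration and all OCT-specific processing omitted:

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel intensity variance across N repeated frames.

    frames: iterable of 2D arrays acquired at the same location.
    Returns a 2D map; high values indicate motion (e.g., beating cilia).
    """
    stack = np.stack(list(frames), axis=0)  # shape (N, H, W)
    return np.var(stack, axis=0)
```

In a real pipeline the variance map would be thresholded or overlaid on the structural image; here the function only shows the core computation.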
Hydrocortisone Cream to Reduce Perineal Pain after Vaginal Birth: A Randomized Controlled Trial.
Manfre, Margaret; Adams, Donita; Callahan, Gloria; Gould, Patricia; Lang, Susan; McCubbins, Holly; Mintz, Amy; Williams, Sommer; Bishard, Mark; Dempsey, Amy; Chulay, Marianne
2015-01-01
To determine if the use of hydrocortisone cream decreases perineal pain in the immediate postpartum period. This was a randomized controlled trial (RCT), crossover study design, with each participant serving as their own control. Participants received three different methods for perineal pain management at three sequential perineal pain treatments after birth: two topical creams (corticosteroid; placebo) and a control treatment (no cream application). Treatment order was randomly assigned, with participants and investigators blinded to cream type. The primary dependent variable was the change in perineal pain levels (posttest minus pretest pain levels) immediately before and 30 to 60 minutes after perineal pain treatments. Data were analyzed with analysis of variance, with p < 0.05 considered significant. A total of 27 participants completed all three perineal pain treatments over a 12-hour period. A reduction in pain was found after application of both the topical creams, with average perineal pain change scores of -4.8 ± 8.4 mm after treatment with hydrocortisone cream (N = 27) and -6.7 ± 13.0 mm after treatment with the placebo cream (N = 27). Changes in pain scores with no cream application were 1.2 ± 10.5 mm (N = 27). Analysis of variance found a significant difference between treatment groups (F2,89 = 3.6, p = 0.03), with both cream treatments having significantly better pain reduction than the control, no cream treatment (hydrocortisone vs. no cream, p = 0.04; placebo cream vs. no cream, p = 0.01). There were no differences in perineal pain reduction between the two cream treatments (p = .54). This RCT found that the application of either hydrocortisone cream or placebo cream provided significantly better pain relief than no cream application.
Ribeiro, Daniel Cury; de Castro, Marcelo Peduzzi; Sole, Gisela; Vicenzino, Bill
2016-04-01
Manual therapy enhances pain-free range of motion and reduces pain levels, but its effect on shoulder muscle activity is unclear. This study aimed to assess the effects of a sustained glenohumeral postero-lateral glide during elevation on shoulder muscle activity. Thirty asymptomatic individuals participated in a repeated measures study of the electromyographic activity of the supraspinatus, infraspinatus, posterior deltoid, and middle deltoid. Participants performed four sets of 10 repetitions of shoulder scaption and abduction with and without a glide of the glenohumeral joint. Repeated-measures multivariate analysis of variance (MANOVA) was used to assess the effects of movement direction (scaption and abduction) and condition (with and without glide) (within-subject factors) on activity level of each muscle (dependent variables). Significant MANOVAs were followed up with repeated-measures one-way analysis of variance. During shoulder scaption with glide, the supraspinatus showed a reduction of 4.1% maximal isometric voluntary contraction (MVIC) (95% CI 2.4, 5.8); and infraspinatus 1.3% MVIC (95% CI 0.5, 2.1). During shoulder abduction with a glide, supraspinatus presented a reduction of 2.5% MVIC (95% CI 1.1, 4.0), infraspinatus 2.1% MVIC (95% CI 1.0, 3.2), middle deltoid 2.2% MVIC (95% CI = 0.4, 4.1), posterior deltoid 2.1% MVIC (95% CI 1.3, 2.8). In asymptomatic individuals, sustained glide reduced shoulder muscle activity compared to control conditions. This might be useful in enhancing shoulder movement in clinical populations. Reductions in muscle activity might result from altered joint mechanics, including simply helping to lift the arm, and/or through changing afferent sensory input about the shoulder. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester
2015-02-01
Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods of time; together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring, and large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step; the approach is usable for automatic detection of a region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
Möldner, Meike; Unglaub, Frank; Hahn, Peter; Müller, Lars P; Bruckner, Thomas; Spies, Christian K
2015-02-01
To investigate functional and subjective outcome parameters after arthroscopic debridement of central articular disc lesions (Palmer type 2C) and to correlate these findings with ulna length. Fifty patients (15 men; 35 women; mean age, 47 y) with Palmer type 2C lesions underwent arthroscopic debridement. Nine of these patients (3 men; 6 women; mean static ulnar variance, 2.4 mm; SD, 0.5 mm) later underwent ulnar shortening osteotomy because of persistent pain and had a mean follow-up of 36 months. Mean follow-up was 38 months for patients with debridement only (mean static ulnar variance, 0.5 mm; SD, 1.2 mm). Examination parameters included range of motion, grip and pinch strengths, pain (visual analog scale), and functional outcome scores (Modified Mayo Wrist score [MMWS] and Disabilities of the Arm, Shoulder, and Hand [DASH] questionnaire). Patients who had debridement only reached a DASH questionnaire score of 18 and an MMWS of 89 with significant pain reduction from 7.6 to 2.0 on the visual analog scale. Patients with additional ulnar shortening reached a DASH questionnaire score of 18 and an MMWS of 88, with significant pain reduction from 7.4 to 2.5. Neither surgical treatment compromised grip and pinch strength in comparison with the contralateral side. We identified 1.8 mm or more of positive ulnar variance as an indication for early ulnar shortening in the case of persistent ulnar-sided wrist pain after arthroscopic debridement. Arthroscopic debridement was a sufficient and reliable treatment option for the majority of patients with Palmer type 2C lesions. Because reliable predictors of the necessity for ulnar shortening are lacking, we recommend arthroscopic debridement as a first-line treatment for all triangular fibrocartilage 2C lesions, and, in the presence of persistent ulnar-sided wrist pain, ulnar shortening osteotomy after an interval of 6 months. Ulnar shortening proved to be sufficient and safe for these patients. 
Patients with persistent ulnar-sided wrist pain after debridement who had preoperative static positive ulnar variance of 1.8 mm or more may be treated by ulnar shortening earlier in order to spare them prolonged symptoms. Therapeutic IV. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly Iii, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
Individual differences in personality traits reflect structural variance in specific brain regions.
Gardini, Simona; Cloninger, C Robert; Venneri, Annalena
2009-06-30
Personality dimensions such as novelty seeking (NS), harm avoidance (HA), reward dependence (RD) and persistence (PER) are said to be heritable, stable across time and dependent on genetic and neurobiological factors. Recently a better understanding of the relationship between personality traits and brain structures/systems has become possible due to advances in neuroimaging techniques. This Magnetic Resonance Imaging (MRI) study investigated whether individual differences in these personality traits reflect structural variance in specific brain regions. A large sample of eighty-five young adult participants completed the Tridimensional Personality Questionnaire (TPQ) and had their brains imaged with MRI. A voxel-based correlation analysis was carried out between individuals' personality trait scores and grey matter volume values extracted from 3D brain scans. NS correlated positively with grey matter volume in frontal and posterior cingulate regions. HA showed a negative correlation with grey matter volume in orbito-frontal, occipital and parietal structures. RD was negatively correlated with grey matter volume in the caudate nucleus and in the rectal frontal gyrus. PER showed a positive correlation with grey matter volume in the precuneus, paracentral lobule and parahippocampal gyrus. These results indicate that individual differences in the main personality dimensions of NS, HA, RD and PER may reflect structural variance in specific brain areas.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A
2017-05-01
Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance cross sectionally and 20% prospectively in BI; and explained 13-20% of variance cross sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.
Advanced Technology Display House. Volume 2: Energy system design concepts
NASA Technical Reports Server (NTRS)
Maund, D. H.
1981-01-01
The preliminary design concept for the energy systems in the Advanced Technology Display House is analyzed. Residential energy demand, energy conservation, and energy concepts are included. Photovoltaic arrays and REDOX (reduction oxidation) sizes are discussed.
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of a dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model.
The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
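The speed-up from direct CVE estimation can be illustrated in the simpler ordinary-least-squares setting (an analogy, not the paper's LSC formula): the entire leave-one-out error vector follows from a single fit as e_loo = e / (1 − diag(H)), where H is the hat matrix, instead of refitting n times.

```python
import numpy as np

def loo_errors_direct(X, y):
    """Leave-one-out residuals for least squares in one shot:
    e_loo = e / (1 - diag(H)), with hat matrix H = X (X'X)^-1 X'."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta
    H = X @ np.linalg.solve(X.T @ X, X.T)
    return e / (1.0 - np.diag(H))

def loo_errors_slow(X, y):
    """Element-wise LOO: refit n times, predict the held-out point."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        out[i] = y[i] - X[i] @ b
    return out

# Synthetic check that the two routes agree.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0.0, 0.3, 40)
```

In LSC the analogous shortcut works through the collocation influence matrix rather than the regression hat matrix, but the structure of the result is the same.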
Furgang, David; Sreenivasan, Prem K; Zhang, Yun Po; Fine, Daniel H; Cummins, Diane
2003-09-01
This investigation examined the in vitro and ex vivo antimicrobial effects of a new dentifrice, Colgate Total Advanced Fresh, formulated with triclosan/copolymer/sodium fluoride, on oral bacteria, including those odorigenic bacteria implicated in bad breath. The effects of Colgate Total Advanced Fresh were compared to commercially available fluoride dentifrices that served as controls. Three experimental approaches were undertaken for these studies. In the first approach, the dentifrice formulations were tested in vitro against 13 species of oral bacteria implicated in bad breath. The second approach examined the antimicrobial activity derived from dentifrice that was adsorbed to and released from hydroxyapatite disks. In this approach, dentifrice-treated hydroxyapatite disks were immersed in a suspension of bacteria, and reduction in bacterial viability from the release of bioactive agents from hydroxyapatite was determined. The third approach examined the effect of treating bacteria immediately after their removal from the oral cavity of 11 adult human volunteers. This ex vivo study examined the viability of cultivable oral bacteria after dentifrice treatment for 2 minutes. Antimicrobial effects were determined by plating Colgate Total Advanced Fresh and control-dentifrice-treated samples on enriched media (for all cultivable oral bacteria) and indicator media (for hydrogen-sulfide-producing organisms), respectively. Results indicated that the antimicrobial effects of Colgate Total Advanced Fresh were significantly greater than either of the other dentifrices for all 13 oral odorigenic bacterial strains tested in vitro (P ≤ 0.05). In the second approach, Colgate Total Advanced Fresh-treated hydroxyapatite disks were significantly more active in reducing bacterial growth than the other dentifrices tested (P ≤ 0.05).
Finally, ex vivo treatment of oral bacteria with Colgate Total Advanced Fresh demonstrated a 90.9% reduction of all oral cultivable bacteria and a 91.5% reduction of oral bacteria producing hydrogen sulfide compared with the control dentifrice. In conclusion, these results, taken together with the significant reductions in clinical malodor scores by Colgate Total Advanced Fresh demonstrated in organoleptic studies, strongly suggest that this dentifrice kills the bacteria that are implicated in the cause of bad breath.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uresk, D.W.; Gilbert, R.O.; Rickard, W.H.
Big sagebrush (Artemisia tridentata) was subjected to a double sampling procedure to obtain reliable phytomass estimates for leaves, flowering stalks, live wood, dead wood, various combinations of the preceding, and total phytomass. Coefficients of determination (R²) between the independent variable and the various phytomass categories ranged from 0.45 to 0.93. Total phytomass was approximately 69 ± 16 (± S.E.) g/m². Reductions in the variance of the phytomass estimates ranged from 33 to 80 percent using double sampling, assuming optimum allocation. (auth)
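Double sampling with a regression estimator, as used here for sagebrush phytomass, can be sketched with synthetic numbers (all values hypothetical): a cheap auxiliary variable is measured on a large sample, expensive clipping on a small subsample, and the variance of the mean estimate drops by roughly a factor of 1 − ρ² relative to the plain subsample mean when the auxiliary variable correlates with phytomass.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cheap auxiliary variable (e.g. crown cover) on a large sample; expensive
# clipped phytomass on a small random subsample (all numbers hypothetical).
n_large, n_small = 500, 50
x_large = rng.gamma(4.0, 2.0, n_large)
idx = rng.choice(n_large, n_small, replace=False)
x_small = x_large[idx]
y_small = 5.0 + 7.0 * x_small + rng.normal(0.0, 4.0, n_small)

# Regression (double-sampling) estimator of mean phytomass: adjust the
# subsample fit to the large-sample mean of the auxiliary variable.
slope, intercept = np.polyfit(x_small, y_small, 1)
y_reg = intercept + slope * x_large.mean()

# Approximate variance-reduction factor versus the plain subsample mean.
rho = np.corrcoef(x_small, y_small)[0, 1]
reduction_factor = 1.0 - rho ** 2
```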
Tactical Implications of Air Blast Variations from Nuclear Tests
1976-11-30
work completed under Contract ODlA 001-76-C-0284. The objective of this analysis was to assess the rationale for additional underground tests (UGT) to...applications were based, and additional applications of the methodology for a more complete assessment of the UGT rationale. This report summarizes work...corresponding to a 25 percent to 50 percent reduction in yield. The maximum improvement possible through UGT is, of course, when the variance in the weapon
NASA Astrophysics Data System (ADS)
El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.
2015-10-01
A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emitter calculations in proton therapy. Prompt-γ rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, the estimation of the number and the energy of emitted prompt-γ per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, ca. 10^5.
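The variance advantage of a track-length estimator over a collision-type estimator can be seen in a deliberately simple toy problem (a purely absorbing slab under normal incidence, nothing like the paper's prompt-γ tally): both estimators are unbiased for the same volume-integrated flux, but the TLE scores on every history rather than only on the histories that happen to collide inside the region.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_t = 0.2       # total macroscopic cross-section (1/cm), pure absorber
L = 1.0             # slab thickness (cm), unit cross-sectional area
N = 200_000         # histories, normally incident on the slab face

d = rng.exponential(1.0 / sigma_t, N)      # distance to first collision

# Track-length estimator: every history scores its path length in the slab.
tle_scores = np.minimum(d, L)

# Collision estimator: only histories colliding inside score 1/sigma_t.
col_scores = np.where(d < L, 1.0 / sigma_t, 0.0)

# Same expectation (the volume-integrated flux), very different variance.
flux_tle, flux_col = tle_scores.mean(), col_scores.mean()
```

The analytic answer here is (1 − exp(−Σt·L))/Σt, and the TLE variance is far below the collision estimator's because its scores are bounded by the slab thickness.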
A pediatric correlational study of stride interval dynamics, energy expenditure and activity level.
Ellis, Denine; Sejdic, Ervin; Zabjek, Karl; Chau, Tom
2014-08-01
The strength of time-dependent correlations known as stride interval (SI) dynamics has been proposed as an indicator of neurologically healthy gait. Most recently, it has been hypothesized that these dynamics may be necessary for gait efficiency, although the supporting evidence to date is scant. The current study examines over-ground SI dynamics, and their relationship with the cost of walking and physical activity levels, in neurologically healthy children aged nine to 15 years. Twenty participants completed a single experimental session consisting of three phases: 10 min resting, 15 min walking and 10 min recovery. The scaling exponent (α) was used to characterize SI dynamics, while net energy cost was measured using a portable metabolic cart, and physical activity levels were determined based on a 7-day recall questionnaire. No significant linear relationships were found between α and the net energy cost measures (r < .07; p > .25) or between α and physical activity levels (r = .01, p = .62). However, there was a marked reduction in the variance of α as activity levels increased. Over-ground stride dynamics do not appear to directly reflect energy conservation of gait in neurologically healthy youth. However, the reduction in the variance of α with increasing physical activity suggests a potential exercise-moderated convergence toward a level of stride interval persistence for able-bodied youth reported in the literature. This latter finding warrants further investigation.
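The scaling exponent α of stride-interval series is conventionally obtained by detrended fluctuation analysis (DFA); a compact generic sketch (window scales and detrending order are assumptions, not necessarily the study's exact parameters):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n,
    with linear detrending inside non-overlapping windows."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluct = []
    for n in scales:
        m = len(y) // n
        t = np.arange(n)
        f2 = 0.0
        for k in range(m):
            seg = y[k * n:(k + 1) * n]
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(f2 / m))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# White noise should give alpha near 0.5; persistent series give alpha > 0.5.
rng = np.random.default_rng(5)
alpha = dfa_alpha(rng.normal(size=4096))
```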
Rasper, Michael; Nadjiri, Jonathan; Sträter, Alexandra S; Settles, Marcus; Laugwitz, Karl-Ludwig; Rummeny, Ernst J; Huber, Armin M
2017-06-01
To prospectively compare image quality and myocardial T1 relaxation times of modified Look-Locker inversion recovery (MOLLI) imaging at 3.0 T acquired with patient-adaptive dual-source (DS) and conventional single-source (SS) radiofrequency (RF) transmission. Pre- and post-contrast MOLLI T1 mapping using SS and DS was acquired in 27 patients. Patient-wise and segment-wise analysis of T1 times was performed. The correlation of DS MOLLI measurements with a reference spin echo sequence was analysed in phantom experiments. DS MOLLI imaging reduced the T1 standard deviation in 14 out of 16 myocardial segments (87.5%). Significant reduction of T1 variance could be obtained in 7 segments (43.8%). DS significantly reduced myocardial T1 variance in 16 out of 25 patients (64.0%). With conventional RF transmission, dielectric shading artefacts occurred in six patients, causing diagnostic uncertainty. No corresponding artefacts were found on DS images. DS image findings were in accordance with conventional T1 mapping and late gadolinium enhancement (LGE) imaging. Phantom experiments demonstrated good correlation of myocardial T1 times between DS MOLLI and spin echo imaging. Dual-source RF transmission enhances myocardial T1 homogeneity in MOLLI imaging at 3.0 T. The reduction of signal inhomogeneities and artefacts due to dielectric shading is likely to enhance diagnostic confidence.
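The T1 values behind a MOLLI map come from fitting the recovery model S(TI) = A − B·exp(−TI/T1*) and applying the Look-Locker correction T1 = T1*·(B/A − 1). A noiseless toy fit (hypothetical signal values; the plateau A is treated as known here so the fit becomes log-linear, whereas real pipelines do a full three-parameter nonlinear fit):

```python
import numpy as np

# Hypothetical noiseless myocardial example, times in ms, signal in a.u.
A, B, T1 = 1.0, 2.0, 1200.0
T1_star = T1 / (B / A - 1.0)               # apparent T1 under readout
TI = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
S = A - B * np.exp(-TI / T1_star)

# With the plateau A known, the model is log-linear in TI:
#   log(A - S) = log(B) - TI / T1_star
slope, intercept = np.polyfit(TI, np.log(A - S), 1)
T1_star_est = -1.0 / slope
B_est = np.exp(intercept)
T1_est = T1_star_est * (B_est / A - 1.0)   # Look-Locker correction
```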
Hamonts, Kelly; Ryngaert, Annemie; Smidt, Hauke; Springael, Dirk; Dejonghe, Winnie
2014-03-01
Chlorinated aliphatic hydrocarbons (CAHs) often discharge into rivers as contaminated groundwater baseflow. As biotransformation of CAHs in the impacted river sediments might be an effective remediation strategy, we investigated the determinants of the microbial community structure of eutrophic, CAH-polluted sediments of the Zenne River. Based on PCR-DGGE analysis, a high diversity of Bacteria, sulfate-reducing bacteria, Geobacteraceae, methanogenic archaea, and CAH-respiring Dehalococcoides was found. Depth in the riverbed, organic carbon content, CAH content and texture of the sediment, pore water temperature and conductivity, and concentrations of toluene and methane significantly contributed to the variance in the microbial community structure. On a meter scale, CAH concentrations alone explained only 6% of the variance in the Dehalococcoides and sulfate-reducing communities. On a cm-scale, however, CAHs explained 14.5-35% of the variation in DGGE profiles of Geobacteraceae, methanogens, sulfate-reducing bacteria, and Bacteria, while organic carbon content explained 2-14%. Neither the presence of the CAH reductive dehalogenase genes tceA, bvcA, and vcrA, nor the community structure of the targeted groups significantly differed between riverbed locations showing either no attenuation or reductive dechlorination, indicating that the microbial community composition was not a limiting factor for biotransformation in the Zenne sediments. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
Allen, Scott L; McGuigan, Katrina; Connallon, Tim; Blows, Mark W; Chenoweth, Stephen F
2017-10-01
A proposed benefit to sexual selection is that it promotes purging of deleterious mutations from populations. For this benefit to be realized, sexual selection, which is usually stronger on males, must purge mutations deleterious to both sexes. Here, we experimentally test the hypothesis that sexual selection on males purges deleterious mutations that affect both male and female fitness. We measured male and female fitness in two panels of spontaneous mutation-accumulation lines of the fly, Drosophila serrata, each established from a common ancestor. One panel of mutation accumulation lines limited both natural and sexual selection (LS lines), whereas the other panel limited natural selection, but allowed sexual selection to operate (SS lines). Although mutation accumulation caused a significant reduction in male and female fitness in both the LS and SS lines, sexual selection had no detectable effect on the extent of the fitness reduction. Similarly, despite evidence of mutational variance for fitness in males and females of both treatments, sexual selection had no significant impact on the amount of mutational genetic variance for fitness. However, sexual selection did reshape the between-sex correlation for fitness: significantly strengthening it in the SS lines. After 25 generations, the between-sex correlation for fitness was positive but considerably less than one in the LS lines, suggesting that, although most mutations had sexually concordant fitness effects, sex-limited, and/or sex-biased mutations contributed substantially to the mutational variance. In the SS lines this correlation was strong and could not be distinguished from unity. 
Individual-based simulations that mimic the experimental setup reveal two conditions that may drive our results: (1) a modest-to-large fraction of mutations have sex-limited (or highly sex-biased) fitness effects, and (2) the average fitness effect of sex-limited mutations is larger than the average fitness effect of mutations that affect both sexes similarly. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The first Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed); compared with the variances of the other PCs (less than 8% each), this means that common signals are significant in the GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We also observed a significant average reduction in the uncertainty of station velocities estimated from filtered residuals: by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals leads to a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
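The stacking idea behind CME filtering can be sketched with plain PCA on synthetic residuals (pPCA additionally handles missing epochs through an EM formulation; all data below are hypothetical): the leading principal component of the epochs-by-stations residual matrix is taken as the common mode and subtracted.

```python
import numpy as np

rng = np.random.default_rng(6)
n_epochs, n_stations = 500, 25

# Synthetic residuals: one regional common-mode signal seen by every
# station (with a per-station gain) plus independent station noise.
cme_true = np.cumsum(rng.normal(0.0, 0.3, n_epochs))     # correlated signal
response = rng.uniform(0.8, 1.2, n_stations)             # per-station gain
X = np.outer(cme_true, response) + rng.normal(0.0, 1.0, (n_epochs, n_stations))

# Spatio-temporal filtering by PCA: the first PC is the common mode error.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cme_est = np.outer(U[:, 0] * s[0], Vt[0])                # rank-1 CME field
filtered = Xc - cme_est

explained = s[0] ** 2 / (s ** 2).sum()   # variance share of the first PC
```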
Seizing an opportunity: increasing use of cessation services following a tobacco tax increase.
Keller, Paula A; Greenseid, Lija O; Christenson, Matthew; Boyle, Raymond G; Schillo, Barbara A
2015-04-10
Tobacco tax increases are associated with increases in quitline calls and reductions in smoking prevalence. In 2013, ClearWay Minnesota(SM) conducted a six-week media campaign promoting QUITPLAN® Services (QUITPLAN Helpline and quitplan.com) to leverage the state's tax increase. The purpose of this study was to ascertain the association of the tax increase and media campaign with call volumes, web visits, and enrollments in QUITPLAN Services. In this observational study, call volume, web visits, enrollments, and participant characteristics were analyzed for the periods June-August 2012 and June-August 2013. Enrollment data and information about media campaigns were analyzed using multivariate regression analysis to determine the association of the tax increase with enrollments in QUITPLAN Services while controlling for media. There was a 160% increase in total combined calls and web visits, and an 81% increase in enrollments in QUITPLAN Services. Helpline call volumes and enrollments declined back to prior-year levels approximately six weeks after the tax increase. Visits to and enrollments in quitplan.com also declined, but increased again in mid-August. The tax increase and media explained over 70% of the variation in enrollments in the QUITPLAN Helpline, with media explaining 34% of the variance and the tax increase explaining an additional 36.1%. However, media explained 64% of the variance in quitplan.com enrollments, and the tax increase explained an additional 7.6%. Since tax increases occur infrequently, these policy changes must be fully leveraged as quickly as possible to help reduce prevalence.
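The partitioning of explained variance between media and the tax increase corresponds to hierarchical regression: fit nested models and take the difference in R². A synthetic illustration (made-up coefficients and sample size, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 120                                      # hypothetical weekly observations
media = rng.normal(size=n)                   # standardised media exposure
tax = (np.arange(n) > 60).astype(float)      # post-tax-increase indicator
enroll = 2.0 * media + 1.5 * tax + rng.normal(size=n)

def r2(predictors, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_media = r2([media], enroll)               # step 1: media only
r2_full = r2([media, tax], enroll)           # step 2: media + tax
delta_r2 = r2_full - r2_media                # variance uniquely added by tax
```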
Tarazi, R; Sebbenn, A M; Kageyama, P Y; Vencovsky, R
2013-01-01
Edge effects may affect the mating system of tropical tree species and reduce the genetic diversity and variance effective size of collected seeds at the boundaries of forest fragments because of a reduction in the density of reproductive trees, neighbour size and changes in the behaviour of pollinators. Here, edge effects on the genetic diversity, mating system and pollen pool of the insect-pollinated Neotropical tree Copaifera langsdorffii were investigated using eight microsatellite loci. Open-pollinated seeds were collected from 17 seed trees within continuous savannah woodland (SW) and were compared with seeds from 11 seed trees at the edge of the savannah remnant. Seeds collected from the SW had significantly higher heterozygosity levels (Ho=0.780; He=0.831) than seeds from the edge (Ho=0.702; He=0.800). The multilocus outcrossing rate was significantly higher in the SW (tm=0.859) than in the edge (tm=0.759). Pollen pool differentiation was significant, however, it did not differ between the SW (=0.105) and the edge (=0.135). The variance effective size within the progenies was significantly higher in the SW (Ne=2.65) than at the edge (Ne=2.30). The number of seed trees to retain the reference variance effective size of 500 was 189 at the SW and 217 at the edge. Therefore, it is preferable that seed harvesting for conservation and environmental restoration strategies be conducted in the SW, where genetic diversity and variance effective size within progenies are higher. PMID:23486081
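The reported numbers of seed trees follow from dividing the reference effective size by the mean variance effective size captured per open-pollinated family, rounded to the nearest tree (a back-of-the-envelope check of the abstract's figures):

```python
def trees_needed(ne_reference, ne_per_family):
    # m = Ne(reference) / Ne(per open-pollinated family)
    return round(ne_reference / ne_per_family)

print(trees_needed(500, 2.65), trees_needed(500, 2.30))  # -> 189 217
```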
Litzow, Michael A.; Piatt, John F.
2003-01-01
We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.
Risk and the evolution of human exchange.
Kaplan, Hillard S; Schniter, Eric; Smith, Vernon L; Wilson, Bart J
2012-08-07
Compared with other species, exchange among non-kin is a hallmark of human sociality in both the breadth of individuals and total resources involved. One hypothesis is that extensive exchange evolved to buffer the risks associated with hominid dietary specialization on calorie dense, large packages, especially from hunting. 'Lucky' individuals share food with 'unlucky' individuals with the expectation of reciprocity when roles are reversed. Cross-cultural data provide prima facie evidence of pair-wise reciprocity and an almost universal association of high-variance (HV) resources with greater exchange. However, such evidence is not definitive; an alternative hypothesis is that food sharing is really 'tolerated theft', in which individuals possessing more food allow others to steal from them, owing to the threat of violence from hungry individuals. Pair-wise correlations may reflect proximity providing greater opportunities for mutual theft of food. We report a laboratory experiment of foraging and food consumption in a virtual world, designed to test the risk-reduction hypothesis by determining whether people form reciprocal relationships in response to variance of resource acquisition, even when there is no external enforcement of any transfer agreements that might emerge. Individuals can forage in a high-mean, HV patch or a low-mean, low-variance (LV) patch. The key feature of the experimental design is that individuals can transfer resources to others. We find that sharing hardly occurs after LV foraging, but among HV foragers sharing increases dramatically over time. The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships.
Recent technological advances in computed tomography and the clinical impact therein.
Runge, Val M; Marquez, Herman; Andreisek, Gustav; Valavanis, Anton; Alkadhi, Hatem
2015-02-01
Current technological advances in CT, specifically those with a major impact on clinical imaging, are discussed. The intent was to provide for both medical physicists and practicing radiologists a summary of the clinical impact of each advance, offering guidance in terms of utility and day-to-day clinical implementation, with specific attention to radiation dose reduction.
Cost as a technology driver [in aerospace R and D]
NASA Technical Reports Server (NTRS)
Fitzgerald, P. E., Jr.; Savage, M.
1976-01-01
Cost management as a guiding factor in the optimum development of technology, and the proper timing of cost-saving programs in the development of a system or technology with payoffs in development and operational advances, are discussed and illustrated. Advances enhancing the performance of hardware, and software advances raising productivity or reducing cost, are outlined, with examples drawn from: thermochemical thrust maximization, development of cryogenic storage tanks, improvements in fuel cells for the Space Shuttle, design of a spacecraft pyrotechnic initiator, cost cutting by reduction in the number of parts to be joined, and cost cutting by dramatic reductions in circuit component count with small-scale double-diffused integrated circuitry. Program-focused supporting research and technology models are devised to aid the judicious timing of cost-conscious research programs.
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of identifying parameters in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
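The composite scaled sensitivity discussed above has a standard closed form; a minimal sketch of the commonly used definition (variable names are ours, not Hill's code):

```python
import numpy as np

def composite_scaled_sensitivity(J, b, w):
    # CSS for each parameter j, given a Jacobian J (n_obs x n_par),
    # parameter values b, and observation weights w:
    #   css_j = sqrt( mean_i ( J_ij * b_j * sqrt(w_i) )**2 )
    # i.e. the root-mean-square of the dimensionless scaled sensitivities.
    S = J * b[None, :] * np.sqrt(w)[:, None]
    return np.sqrt((S ** 2).mean(axis=0))
```

Parameters with small CSS relative to the largest one are the usual candidates for being poorly informed by the observations, which is why CSS is paired with PCC to detect correlated (jointly unidentifiable) parameters.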
NASA Astrophysics Data System (ADS)
Liu, Yahui; Fan, Xiaoqian; Lv, Chen; Wu, Jian; Li, Liang; Ding, Dawei
2018-02-01
Information fusion for INS/GPS navigation systems based on filtering technology is a current research focus. In order to improve the precision of navigation information, an Adaptive Kalman Filter with an attenuation factor is proposed to suppress noise. The algorithm continuously updates the measurement noise variance and process noise variance of the system by collecting the estimated and measured values, and this method can suppress white noise. Because a measured value closer to the current time more accurately reflects the characteristics of the noise, an attenuation factor is introduced to increase the weight of the current value, in order to deal with the noise variance caused by environmental disturbance. To validate the effectiveness of the proposed algorithm, a series of road tests was carried out in an urban environment. The GPS and IMU data of the experiments were collected and processed by dSPACE and MATLAB/Simulink. Based on the test results, the accuracy of the proposed algorithm is 20% higher than that of a traditional Adaptive Kalman Filter. The results also show that the precision of the integrated navigation can be improved by reducing the influence of environmental noise.
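The attenuation-factor idea above can be sketched in one dimension: the measurement noise variance R is re-estimated from the innovations with a fading-memory weight so recent residuals count more. This is an illustrative Sage-Husa-style filter under our own simplifications (random-walk state, scalar measurements), not the authors' implementation:

```python
import numpy as np

def adaptive_kf_1d(zs, q=1e-3, r0=1.0, b=0.96):
    # 1-D Kalman filter whose measurement noise variance r is adapted
    # online. The attenuation factor b < 1 makes the averaging of squared
    # innovations fade, weighting the most recent residual more heavily.
    x, p, r = zs[0], 1.0, r0
    estimates = []
    for k, z in enumerate(zs):
        p = p + q                                # predict (random-walk model)
        nu = z - x                               # innovation
        if k > 0:
            d = (1 - b) / (1 - b ** (k + 1))     # fading-memory weight
            r = (1 - d) * r + d * max(nu ** 2 - p, 1e-9)  # adapt R
        g = p / (p + r)                          # Kalman gain
        x += g * nu                              # correct
        p *= (1 - g)
        estimates.append(x)
    return np.array(estimates)
```

On a noisy constant signal the filter settles near the true value while r converges toward the actual measurement noise variance.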
2008-09-01
In a two-stage process the urea decomposes to ammonia (NH3), which then reacts with the nitrogen oxides (NOx) and leads to the formation of nitrogen and... Sulphur Fuel (HSF) is a potential problem to NATO forces when vehicles and equipment are fitted with advanced emission reduction devices that require Low... worldwide available, standard fuel (F-34) and equipment capable of using such high sulphur fuels (HSF). Recommendations: Future equipment fitted with
Vilos, George A; Vilos, Angelos G; Abu-Rafea, Basim; Pron, Gaylene; Kozak, Roman; Garvin, Greg
2006-05-01
To determine if goserelin administered immediately after uterine artery embolization (UAE) affected myoma reduction. Randomized pilot study (level 1). Teaching hospital. Twenty-six women. All patients underwent UAE, and then 12 patients received 10.8 mg of goserelin 24 hours later. The treatment group was 5 years older: 43 versus 37.7 years. Uterine and myoma volumes were measured by ultrasound 2 weeks before UAE and at 3, 6, and 12 months. Uterine and fibroid volumes. Pretreatment uterine volume was 477 versus 556 cm3, and dominant fibroid volume was 257 versus 225 cm3 in the control versus goserelin groups. Analysis of variance of repeated measurements indicated that the change over time did not significantly differ between the two groups. By 12 months, the control group had a mean uterine volume reduction of 58%, while the goserelin group had a reduction of 45%. Dominant fibroid changes over time did not differ between the two groups. At 12 months, the mean fibroid volume had decreased by 86% and 58% in the control and goserelin groups, respectively. The addition of goserelin therapy to UAE did not alter the reduction rate or volume of uterine myomas.
Research and Development Project Summaries, October 1991
1991-10-01
delivery methods, training cost reduction, demonstration of technology effectiveness, and the reduction of acquisition risk. The majority of the work... demonstrations, risk reduction developments, and cost-effectiveness investigations in simulator and training technology. This advanced development program is a... systems. The program is organized around specific demonstration tasks that target critical technical risks that confront future weapons system
ERIC Educational Resources Information Center
Wholeben, Brent Edward
A rationale is presented for viewing the decision-making process inherent in determining budget reductions for educational programs as most effectively modeled by a graduated funding approach. The major tenets of the graduated budget reduction approach to educational fiscal policy include the development of multiple alternative reduction plans, or…
20 CFR 606.25 - Waiver of and substitution for additional tax credit reduction.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., DEPARTMENT OF LABOR TAX CREDITS UNDER THE FEDERAL UNEMPLOYMENT TAX ACT; ADVANCES UNDER TITLE XII OF THE SOCIAL SECURITY ACT Relief From Tax Credit Reduction § 606.25 Waiver of and substitution for additional tax credit reduction. A provision of subsection (c)(2) of section 3302 of FUTA provides that, for a...
Recent advances in reduction methods for nonlinear problems. [in structural mechanics
NASA Technical Reports Server (NTRS)
Noor, A. K.
1981-01-01
Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.
Stanley, Rebecca M.; Ridley, Kate; Olds, Timothy S.; Dollman, James
2014-01-01
Background The lunchtime and after-school contexts are critical windows in a school day for children to be physically active. While numerous studies have investigated correlates of children’s habitual physical activity, few have explored correlates of physical activity occurring at lunchtime and after-school from a social-ecological perspective. Exploring correlates that influence physical activity occurring in specific contexts can potentially improve the prediction and understanding of physical activity. Using a context-specific approach, this study investigated correlates of children’s lunchtime and after-school physical activity. Methods Cross-sectional data were collected from 423 South Australian children aged 10.0–13.9 years (200 boys; 223 girls) attending 10 different schools. Lunchtime and after-school physical activity was assessed using accelerometers. Correlates were assessed using purposely developed context-specific questionnaires. Correlated Component Regression analysis was conducted to derive correlates of context-specific physical activity and determine the variance explained by prediction equations. Results The model of boys’ lunchtime physical activity contained 6 correlates and explained 25% of the variance. For girls, the model explained 17% variance from 9 correlates. Enjoyment of walking during lunchtime was the strongest correlate for both boys and girls. Boys’ and girls’ after-school physical activity models explained 20% variance from 14 correlates and 7% variance from the single item correlate, “I do an organised sport or activity after-school because it gets you fit”, respectively. Conclusions Increasing specificity of correlate research has enabled the identification of unique features of, and a more in-depth interpretation of, lunchtime and after-school physical activity behaviour and is a potential strategy for advancing the physical activity correlate research field. 
The findings of this study could be used to inform and tailor gender-specific public health messages and interventions for promoting lunchtime and after-school physical activity in children. PMID:24809440
Does participative leadership reduce the onset of mobbing risk among nurse working teams?
Bortoluzzi, Guido; Caporale, Loretta; Palese, Alvisa
2014-07-01
To evaluate the advancement of knowledge on the impact of an empowering leadership style on the risk of mobbing behaviour among nurse working teams. The secondary aim was to evaluate, along with leadership style, the contribution of other organisational and individual predictors of mobbing. The role of leadership style in reducing the onset of mobbing risk in nurse working teams remains a matter of discussion. Nurse working teams are particularly affected by mobbing, and studies exploring individual and organisational inhibiting/modulating factors are needed. An empirical study involving 175 nurses of various public hospital corporations in northern Italy. Data were collected via structured and anonymous questionnaires and analysed through a logistic regression. Organisational, individual and participative leadership variables explained 33.5% (P < 0.01) of variance in the onset of mobbing. Two predictive factors emerged: a participative leadership enacted by nursing managers and the nursing shortage as perceived by clinical nurses. Results confirmed that the contribution made by a participative leadership style in attenuating the onset of mobbing risk in working teams was significant. A participative leadership style adopted by the nurse manager allows for the reduction of tensions in nurse working teams. However, mobbing remains a multifaceted phenomenon that is difficult to capture in its entirety, and leadership style cannot be considered a panacea for resolving this problem in nurse working teams. © 2013 John Wiley & Sons Ltd.
Crotin, Ryan L; Forsythe, Charles M; Bhan, Shivam; Karakolis, Thomas
2014-10-01
Major League Baseball (MLB) players have not been longitudinally examined for changes in physical size. Height, weight, and body mass indices (BMIs) were examined among offensive league leaders (OLL) and MLB reference cohorts at 1970, 1990, and 2010. Anthropometric values were expected to increase successively, with OLL expected to be larger at each respective time point. A mixed-model analysis of variance (p ≤ 0.05) examined anthropometric differences over time within and between groups. Mass and BMI increased over successive years, with the largest effect seen between 1990 and 2010 (p < 0.001). A significant height reduction was shown for OLL from 1970 to 1990 (p ≤ 0.05), the only significant decrease in physical size; yet, leaders were heavier and taller compared with the MLB reference population (p < 0.014). Results show that physical size has evolved in MLB, with the OLL being the largest players at each successive year. Professional baseball scouts may have been influenced by the greater offensive prowess shown by larger athletes; yet, secular anthropometric trends must also be factored into the greater heights, weights, and BMIs shown over time in MLB. It is possible that greater participation in strength and conditioning programs at an earlier age, advances in sport nutrition, and potential abuse of anabolic drugs are factors perpetuating current growth rates.
Earnshaw, Valerie A.; Jin, Harry; Wickersham, Jeffrey; Kamarulzaman, Adeeba; John, Jacob; Altice, Frederick L.
2015-01-01
OBJECTIVES Stigma towards people living with HIV/AIDS (PLWHA) is strong in Malaysia. Although stigma has been understudied, it may be a barrier to treating the approximately 81 000 Malaysian PLWHA. The current study explores correlates of intentions to discriminate against PLWHA among medical and dental students, the future healthcare providers of Malaysia. METHODS An online, cross-sectional survey of 1296 medical and dental students was conducted in 2012 at seven Malaysian universities; 1165 (89.9%) completed the survey and were analysed. Sociodemographic characteristics, stigma-related constructs and intentions to discriminate against PLWHA were measured. Linear mixed models were conducted, controlling for clustering by university. RESULTS The final multivariate model demonstrated that students who intended to discriminate more against PLWHA were female, less advanced in their training, and studying dentistry. They further endorsed more negative attitudes towards PLWHA, internalised greater HIV-related shame, reported more HIV-related fear and disagreed more strongly that PLWHA deserve good care. The final model accounted for 38% of the variance in discrimination intent, with 10% accounted for by sociodemographic characteristics and 28% accounted for by stigma-related constructs. CONCLUSIONS It is critical to reduce stigma among medical and dental students to eliminate intentions to discriminate and achieve equitable care for Malaysian PLWHA. Stigma-reduction interventions should be multipronged, addressing attitudes, internalised shame, fear and perceptions of deservingness of care. PMID:24666546
Yakobov, Esther; Scott, Whitney; Stanish, William D; Tanzer, Michael; Dunbar, Michael; Richardson, Glen; Sullivan, Michael J L
2018-05-01
Perceptions of injustice have been associated with problematic recovery outcomes in individuals with a wide range of debilitating pain conditions. It has been suggested that, in patients with chronic pain, perceptions of injustice might arise in response to experiences characterized by illness-related pain severity, depressive symptoms, and disability. If symptom severity and disability are important contributors to perceived injustice (PI), it follows that interventions that yield reductions in symptom severity and disability should also contribute to reductions in perceptions of injustice. The present study examined the relative contributions of postsurgical reductions in pain severity, depressive symptoms, and disability to the prediction of reductions in perceptions of injustice. The study sample consisted of 110 individuals (69 women and 41 men) with osteoarthritis of the knee scheduled for total knee arthroplasty (TKA). Patients completed measures of perceived injustice, depressive symptoms, pain, and disability at their presurgical evaluation, and at 1-year follow-up. The results revealed that reductions in depressive symptoms and disability, but not pain severity, were correlated with reductions in perceived injustice. Regression analyses revealed that reductions in disability and reductions in depressive symptoms contributed modest but significant unique variance to the prediction of postsurgical reductions in perceived injustice. The present findings are consistent with current conceptualizations of injustice appraisals that propose a central role for symptom severity and disability as determinants of perceptions of injustice in patients with persistent pain. The results suggest that the inclusion of psychosocial interventions that target depressive symptoms and perceived injustice might augment the impact of rehabilitation programs made available for individuals recovering from TKA.
Research requirements to reduce empty weight of helicopters by use of advanced materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffstedt, D.J.
1976-12-01
Utilization of the new, lightweight, high-strength, aerospace structural-composite (filament/matrix) materials, when specifically designed into a new aircraft, promises reductions in structural empty weight of 12% at recurring costs competitive with metals. A program of basic and applied research and demonstration is identified with the objective of advancing the state of the art to the point where civil helicopters can be confidently designed, produced, certified, and marketed by 1985. A structural empty-weight reduction of 12% was shown to significantly reduce energy consumption in modern high-performance helicopters.
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise in time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by use of maximum a posteriori (MAP) estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated and the Poisson noise is thereby reduced. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction in time-domain DOT. In simulations, the proposed method reduced the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image was smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening among other uses, is improved by the proposed noise reduction.
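The MAP formulation above (a Poisson likelihood combined with a Markov-random-field smoothness prior on neighbouring values) can be sketched in 1-D with plain gradient descent; the quadratic prior and the solver here are our simplifications for illustration, not the authors' estimator:

```python
import numpy as np

def map_poisson_denoise(y, beta=2.0, iters=500, lr=0.05):
    # MAP estimate of a smooth nonnegative intensity lam from Poisson
    # counts y: minimize the negative log-posterior
    #   sum(lam - y*log(lam)) + beta * sum(diff(lam)**2)
    # where the second term penalizes differences between neighbours.
    lam = np.maximum(y.astype(float), 1.0)
    for _ in range(iters):
        grad = 1.0 - y / lam                          # Poisson data term
        grad[:-1] += 2 * beta * (lam[:-1] - lam[1:])  # smoothness prior
        grad[1:] += 2 * beta * (lam[1:] - lam[:-1])
        lam = np.maximum(lam - lr * grad, 1e-6)       # keep intensities positive
    return lam
```

On a constant true intensity the estimate has markedly lower variance than the raw counts, which is the effect the abstract reports for the reconstructed absorption coefficients.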
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
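The minimum variance distortionless response (MVDR) beamformer mentioned above has a standard narrowband closed form; a minimal numpy sketch (free-field uniform linear array and all names are our assumptions, not the project's binaural implementation):

```python
import numpy as np

def mvdr_weights(R, d):
    # MVDR: minimize output power w^H R w subject to the distortionless
    # constraint w^H d = 1. Closed form: w = R^{-1} d / (d^H R^{-1} d).
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

def steering(n_mics, theta, spacing=0.5):
    # Far-field steering vector of a uniform linear array,
    # element spacing given in wavelengths.
    k = np.arange(n_mics)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))
```

With R estimated from noisy snapshots, the target direction is passed with unity gain while directions that dominate R (interferers, diffuse noise) are attenuated, which is why MVDR variants performed well across all noise scenarios.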
The effect of deep-tissue massage therapy on blood pressure and heart rate.
Kaye, Alan David; Kaye, Aaron J; Swinford, Jan; Baluch, Amir; Bawcom, Brad A; Lambert, Thomas J; Hoover, Jason M
2008-03-01
In the present study, we describe the effects of deep tissue massage on systolic, diastolic, and mean arterial blood pressure. The study involved 263 volunteers (12% male and 88% female), with an average age of 48.5 years. Overall muscle spasm/muscle strain was described as either moderate or severe for each patient. Baseline blood pressure and heart rate were measured via an automatic blood pressure cuff. Twenty-one (21) different soothing CDs were played in the background as the deep tissue massages were performed over the course of the study. The massages were between 45 and 60 minutes in duration. The data were analyzed using analysis of variance with post-hoc Scheffe's F-test. Results of the present study demonstrated an average systolic pressure reduction of 10.4 mm Hg (p<0.06), a diastolic pressure reduction of 5.3 mm Hg (p<0.04), a mean arterial pressure reduction of 7.0 mm Hg (p<0.47), and an average heart rate reduction of 10.8 beats per minute (p<0.0003), respectively. Additional scientific research in this area is warranted.
2010-07-01
cluster input can look like a Fractional Brownian motion even in the slow growth regime''. Advances in Applied Probability, 41(2), 393-427. Yeghiazarian, L... Brownian motion? Ann. Appl. Probab., 12(1):23-68, 2002. [10] A. Mitra and S.I. Resnick. Hidden domain of attraction: extension of hidden regular variation... variance? A paradox and an explanation''. Quantitative Finance, 1, 11 pages. Hult, H. and Samorodnitsky, G. (2010) ``Large deviations for point
2012-08-01
It suggests that a smart use of some a-priori information about the operating environment, when processing the received signal and designing the... random variable with the same variance as the backscattering target amplitude αT, and D(αT, αGT) is the Kullback-Leibler divergence, see [65... MI. Proof: see Appendix 3.6.6. Thus, we can use the optimization procedure of Algorithm 4 to optimize the Mutual Information between the target
Profitless delays for extinction in nonautonomous Lotka-Volterra system
NASA Astrophysics Data System (ADS)
Liu, Shengqiang; Chen, Lansun
2001-12-01
We study delayed periodic n-species Lotka-Volterra systems in which the growth rate of each species is not always positive. Sufficient conditions for extinction that are independent of the delays are obtained. Some known results are improved and generalized. Our results suggest that, under some conditions, both the introduction and the variation of time delays can be harmless and profitless. A discussion of the effect of time delays on the extinction of the system is also advanced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul Imhoff; Ramin Yazdani; Don Augenstein
Methane is an important contributor to global warming, with a total climate forcing estimated to be close to 20% that of carbon dioxide (CO2) over the past two decades. The largest anthropogenic source of methane in the US is 'conventional' landfills, which account for over 30% of anthropogenic emissions. While controlling greenhouse gas emissions must necessarily focus on large CO2 sources, attention to reducing CH4 emissions from landfills can result in significant reductions in greenhouse gas emissions at low cost. For example, the use of 'controlled' or bioreactor landfilling has been estimated to reduce annual US greenhouse emissions by about 15-30 million tons of CO2 carbon (equivalent) at costs between $3-13/ton carbon. In this project we developed or advanced new management approaches, landfill designs, and landfill operating procedures for bioreactor landfills. These advances are needed to address lingering concerns about bioreactor landfills (e.g., efficient collection of increased CH4 generation) in the waste management industry, concerns that hamper bioreactor implementation and the consequent reductions in CH4 emissions. Collectively, the advances described in this report should result in better control of bioreactor landfills and reductions in CH4 emissions. Several advances are important components of an Intelligent Bioreactor Management Information System (IBM-IS).
Great Expectations in the Joint Advanced Manufacturing Region
2016-12-01
would be continuous experimentation and risk reduction prototyping. The entire manufacturing life cycle—design, testing, product development... on the back of a napkin, they decided to call their effort the Joint Advanced Manufacturing Region (JAMR) and manage it as an Integrated Product... designed to support the continuous experimentation of advanced manufacturing tactics, techniques and procedures under actual operational or combat
Managing toxicities and optimal dosing of targeted drugs in advanced kidney cancer
Seruga, B.; Gan, H.K.; Knox, J.J.
2009-01-01
The toxicities of new, targeted drugs may diminish their effectiveness in advanced kidney cancer if those toxicities are not recognized and properly addressed early in patient treatment. Most drug-related toxicities in advanced kidney cancer are manageable with supportive care, obviating the need for long interruptions, dose reductions, or permanent discontinuation of treatment. PMID:19478903
The environmental control and life support system advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1991-01-01
The objective of the ECLSS Advanced Automation project includes reduction of the risk associated with the integration of new, beneficial software techniques. Demonstrations of this software to baseline engineering and test personnel will show the benefits of these techniques. The advanced software will be integrated into ground testing and ground support facilities, familiarizing its usage by key personnel.
Resin transfer molding for advanced composite primary wing and fuselage structures
NASA Technical Reports Server (NTRS)
Markus, Alan
1992-01-01
The stitching and resin transfer molding (RTM) processes developed at Douglas Aircraft Co. are successfully demonstrating significant cost reductions with good damage tolerance properties. These attributes were identified as critical to application of advanced composite materials to commercial aircraft primary structures. The RTM/stitching developments, cost analyses, and test results are discussed of the NASA Advanced Composites Technology program.
NASA Technical Reports Server (NTRS)
Rana, D. S.
1980-01-01
The data reduction capabilities of the current data reduction programs were assessed and a search for a more comprehensive system with higher data analytic capabilities was made. Results of the investigation are presented.
Cognitive learning strategies: their effectiveness in acquiring racquetball skill.
Tennant, L M
2000-06-01
Racquetball players were compared to assess whether a Self-directed strategy (self-monitoring), a Task-oriented strategy (attentional focusing), or a Combined use of both strategies would be beneficial in the acquisition of racquetball skills. Players (N=80) were assigned to treatment groups according to skill level (Beginning, Advanced). After treatment, participants executed diagonal lob serves and rallies in Acquisition and Retention phases (Session 1). During Session 2, subjects competed in a modified play setting (Transfer phase). Analysis of variance with repeated measures showed that differences by skill during the basic tests favored Advanced players. During modified play, the Task-oriented group won significantly more points and games than the Self-directed and Control groups, regardless of skill. Results are discussed relative to skill and the literature on learning strategies.
Advances in Photocatalytic CO₂ Reduction with Water: A Review.
Nahar, Samsun; Zain, M F M; Kadhum, Abdul Amir H; Hasan, Hassimi Abu; Hasan, Md Riad
2017-06-08
In recent years, the increasing level of CO₂ in the atmosphere has not only contributed to global warming but has also triggered considerable interest in the photocatalytic reduction of CO₂. The reduction of CO₂ with H₂O using sunlight is an innovative way to address current and growing environmental challenges. This paper reviews the basic principles of photocatalysis and photocatalytic CO₂ reduction, discusses measures of photocatalytic efficiency, and summarizes current advances in the exploration of this technology using different types of semiconductor photocatalysts, such as TiO₂ and modified TiO₂, layered-perovskite Ag/ALa₄Ti₄O₁₅ (A = Ca, Ba, Sr), ferroelectric LiNbO₃, and plasmonic photocatalysts. Visible-light-harvesting, novel plasmonic photocatalysts offer potential solutions for some of the main drawbacks of this reduction process. Effective plasmonic photocatalysts that have shown reduction activity towards CO₂ with H₂O are highlighted here. Although this technology is still at an embryonic stage, further systematic and comprehensive studies are suggested to develop photocatalysts with high production rates and selectivity. Based on the collected results, the immense prospects and opportunities that exist in this technique are also reviewed.
Fernandez, E; Williams, D G
2009-10-01
The implementation of the European Working Time Directive (WTD) has reduced the hours worked by trainees in the UK to a maximum of 56 h per week. With a further and final reduction to 48 h per week scheduled for August 2009, there is concern amongst doctors about the impact on training and on patient care. Paediatric anaesthesia is one of the specialist areas of anaesthesia for which the Royal College of Anaesthetists (RCoA) recommends a minimum caseload during the period of advanced training. We conducted a retrospective analysis of theatre logbook data from 62 Specialist Registrars (SpRs) who had completed a 12 month period of advanced training in paediatric anaesthesia in our institution between 2000 and 2007. After the implementation of the WTD 56 h week in 2004, the mean total number of cases performed by SpRs per year decreased from 441 to 336, a 24% reduction. We found a statistically significant reduction across all age groups with the largest reduction in the under 1 month of age group. The post-WTD group did not meet the RCoA recommended total minimum caseload or the minimum number of cases of <1 yr of age. Since the implementation of the WTD, there has been a significant reduction in the number of cases performed by SpRs in paediatric anaesthesia and they are no longer achieving the RCoA recommended minimum numbers for advanced training.
Energy reduction for the spot welding process in the automotive industry
NASA Astrophysics Data System (ADS)
Cullen, J. D.; Athi, N.; Al-Jader, M. A.; Shaw, A.; Al-Shamma'a, A. I.
2007-07-01
When performing spot welding on galvanised metals, higher welding force and current are required than on uncoated steels. This has implications for the energy used in creating each spot weld, of which there are approximately 4300 in each passenger car. This paper presents an overview of electrode current selection and its variance over the lifetime of the electrode tip, and describes the proposed analysis system for selecting welding parameters for the spot welding process as the electrode tip wears.
Microprocessor realizations of range rate filters
NASA Technical Reports Server (NTRS)
1979-01-01
The performance of five digital range rate filters is evaluated. A range rate filter receives an input of range data from a radar unit and produces an output of smoothed range data and its estimated derivative range rate. The filters are compared through simulation on an IBM 370. Two of the filter designs are implemented on a 6800 microprocessor-based system. Comparisons are made on the bases of noise variance reduction ratios and convergence times of the filters in response to simulated range signals.
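The noise variance reduction ratio used to compare the filters can be illustrated with a minimal alpha-beta range tracker, a common form of digital range rate filter. This is a generic sketch: the gains, step size, and simulated constant-velocity signal are illustrative assumptions, not the report's five designs.

```python
import numpy as np

def alpha_beta_filter(measurements, dt, alpha, beta):
    """Smooth noisy range data and estimate range rate (its derivative)."""
    x, v = measurements[0], 0.0          # initial range estimate, zero rate
    ranges, rates = [x], [v]
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict range forward one step
        r = z - x_pred                   # innovation (measurement residual)
        x = x_pred + alpha * r           # correct range estimate
        v = v + (beta / dt) * r          # correct range rate estimate
        ranges.append(x)
        rates.append(v)
    return np.array(ranges), np.array(rates)

# Illustrative constant-velocity target with additive unit-variance noise
rng = np.random.default_rng(0)
t = np.arange(500)
truth = 1000.0 + 2.0 * t                 # true range; range rate = 2.0 per step
noise = rng.standard_normal(500)
est_range, est_rate = alpha_beta_filter(truth + noise, dt=1.0,
                                        alpha=0.5, beta=0.1)

# Noise variance reduction ratio: output error variance / input noise variance
# (computed after the initial transient has settled)
vrr = np.var(est_range[200:] - truth[200:]) / np.var(noise)
```

A ratio below one indicates the filter attenuates measurement noise; smaller gains reduce the ratio further at the cost of slower convergence, which is the trade-off such comparisons quantify.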
Transport Test Problems for Hybrid Methods Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Numerical Algorithm for Delta of Asian Option
Zhang, Boxiang; Yu, Yang; Wang, Weiguo
2015-01-01
We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for Δ of the Asian geometric option and use this analytical form as a control to numerically calculate Δ of the Asian arithmetic option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare the standard error with other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
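The control-variate idea in this abstract can be sketched for the option price (the same principle the authors apply to Δ): the geometric Asian call has a closed form under Black-Scholes dynamics, so the Monte Carlo estimate of the arithmetic option can be corrected against the known geometric value. The parameters and the Black-Scholes setup below are illustrative assumptions, not the paper's numerics.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def geometric_asian_call(S0, K, r, sigma, T, n):
    """Closed-form price of a discretely monitored geometric Asian call.

    Under GBM, ln(geometric average) is normal with mean m and variance s2.
    """
    dt = T / n
    m = log(S0) + (r - 0.5 * sigma**2) * dt * (n + 1) / 2
    s2 = sigma**2 * dt * (n + 1) * (2 * n + 1) / (6 * n)
    s = sqrt(s2)
    N = NormalDist().cdf
    return exp(-r * T) * (exp(m + 0.5 * s2) * N((m + s2 - log(K)) / s)
                          - K * N((m - log(K)) / s))

def arithmetic_asian_cv(S0, K, r, sigma, T, n, n_paths, seed=0):
    """Monte Carlo arithmetic Asian call with the geometric payoff as control."""
    rng = np.random.default_rng(seed)
    dt = T / n
    z = rng.standard_normal((n_paths, n))
    logS = log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                               + sigma * sqrt(dt) * z, axis=1)
    disc = exp(-r * T)
    arith = disc * np.maximum(np.exp(logS).mean(axis=1) - K, 0.0)
    geo = disc * np.maximum(np.exp(logS.mean(axis=1)) - K, 0.0)
    # Optimal control coefficient, then shift by the known analytic mean
    beta = np.cov(arith, geo, ddof=1)[0, 1] / np.var(geo, ddof=1)
    cv = arith + beta * (geometric_asian_call(S0, K, r, sigma, T, n) - geo)
    se = lambda a: a.std(ddof=1) / sqrt(n_paths)
    return arith.mean(), se(arith), cv.mean(), se(cv)

plain, se_plain, cv_est, se_cv = arithmetic_asian_cv(
    100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000)
```

Because the geometric and arithmetic payoffs are very highly correlated, the control variate typically shrinks the standard error by an order of magnitude or more at the same path count.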
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
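The paper's use of inexpensive sensitivity derivatives can be sketched as a first-order control variate: subtract the gradient-based linear term, whose mean under the input distribution is known (zero when the expansion point is the input mean). The response function, input distribution, and gradients below are toy assumptions standing in for an expensive analysis code, not the paper's structural model.

```python
import numpy as np

# Toy "analysis code" output and its cheap sensitivity derivatives
def response(x):
    return np.sin(x[:, 0]) + 0.5 * x[:, 1]**2 + x[:, 0] * x[:, 1]

def sensitivities(x0):
    """Gradient of the response at a point (the cheap derivatives)."""
    return np.array([np.cos(x0[0]) + x0[1], x0[1] + x0[0]])

mu = np.array([0.3, -0.2])               # mean of the uncertain inputs
std = np.array([0.2, 0.2])               # input standard deviations
rng = np.random.default_rng(1)
x = mu + std * rng.standard_normal((50000, 2))

plain = response(x)
# Linear control built from derivatives at the mean: E[linear] = 0 exactly,
# so subtracting it leaves the estimator unbiased but less variable.
linear = (x - mu) @ sensitivities(mu)
corrected = plain - linear

se_plain = plain.std(ddof=1) / np.sqrt(len(plain))
se_corrected = corrected.std(ddof=1) / np.sqrt(len(corrected))
```

The correction removes the first-order component of the output scatter, so the remaining variance is governed by higher-order nonlinearity, which is why the gain is largest when input uncertainties are small relative to the response curvature.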
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
Environmentally Responsible Aviation N plus 2 Advanced Vehicle Study
NASA Technical Reports Server (NTRS)
Drake, Aaron; Harris, Christopher A.; Komadina, Steven C.; Wang, Donny P.; Bender, Anne M.
2013-01-01
This is the Northrop Grumman final report for the Environmentally Responsible Aviation (ERA) N+2 Advanced Vehicle Study performed for the National Aeronautics and Space Administration (NASA). Northrop Grumman developed advanced vehicle concepts and associated enabling technologies with a high potential for simultaneously achieving significant reductions in emissions, airport area noise, and fuel consumption for transport aircraft entering service in 2025. A Preferred System Concept (PSC) conceptual design has been completed showing a 42% reduction in fuel burn compared to 1998 technology, and noise 75 dB below Stage 4, for a 224-passenger, 8,000 nm cruise transport aircraft. Roadmaps have been developed for the necessary technology maturation to support the PSC. A conceptual design for a 55%-scale demonstrator aircraft to reduce development risk for the PSC has been completed.
Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics
NASA Technical Reports Server (NTRS)
Bushnell, Dennis M.
2000-01-01
This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.
Preliminary design and analysis of an advanced rotorcraft transmission
NASA Technical Reports Server (NTRS)
Henry, Z. S.
1990-01-01
Future rotorcraft transmissions of the 1990s and beyond the year 2000 require the incorporation of key emerging material and component technologies, using advanced and innovative design practices, in order to meet the requirements for a reduced weight-to-power ratio, a decreased noise level, and substantially increased reliability. The specific goals for future rotorcraft transmissions, compared with current state-of-the-art transmissions, are a 25 percent weight reduction, a 10-dB reduction in the transmitted noise level, and a system reliability of 5000 hours mean time between removals for the transmission. This paper presents the results of the design studies conducted to meet the stated goals for an advanced rotorcraft transmission. These design studies include system configuration, planetary gear train selection, and reliability prediction methods.
Recent advances in the design of tailored nanomaterials for efficient oxygen reduction reaction
Lv, Haifeng; Li, Dongguo; Strmcnik, Dusan; ...
2016-04-11
In the past decade, polymer electrolyte membrane fuel cells (PEMFCs) have been evaluated for both automotive and stationary applications. One of the main obstacles to large-scale commercialization of this technology is the sluggish oxygen reduction reaction that takes place on the cathode side of the fuel cell. Consequently, ongoing research efforts are focused on the design of cathode materials that could improve the kinetics and durability. The majority of these efforts rely on novel synthetic approaches that provide control over the structure, size, shape, and composition of catalytically active materials. This article highlights the most recent advances that have been made to tailor critical parameters of nanoscale materials in order to achieve more efficient performance of the oxygen reduction reaction (ORR).
Potential reduction of en route noise from an advanced turboprop aircraft
NASA Technical Reports Server (NTRS)
Dittmar, James H.
1990-01-01
When the en route noise of a representative aircraft powered by an eight-blade SR-7 propeller was previously calculated, the noise level was cited as a possible concern associated with the acceptance of advanced turboprop aircraft. Some potential methods for reducing the en route noise were then investigated and are reported. Source noise reductions from increasing the blade number and from operating at higher rotative speed to reach a local minimum noise point were investigated. Greater atmospheric attenuations for higher blade passing frequencies were also indicated. Potential en route noise reductions from these methods were calculated as 9.5 dB (6.5 dB(A)) for a 10-blade redesigned propeller and 15.5 dB (11 dB(A)) for a 12-blade redesigned propeller.
Experimental Design for Evaluating the Safety Benefits of Railroad Advance Warning Signs
DOT National Transportation Integrated Search
1979-04-01
The report presents the findings and conclusions of a study to develop an experimental design and analysis plan for field testing and evaluation of the accident reduction potential of a proposed new railroad grade crossing advance warning sign. Sever...
Schmid, Patrick; Yao, Hui; Galdzicki, Michal; Berger, Bonnie; Wu, Erxi; Kohane, Isaac S.
2009-01-01
Background Although microarray technology has become the most common method for studying global gene expression, numerous technical factors across the experiment contribute to the variability of genome-wide gene expression profiling using peripheral whole blood. A practical platform needs to be established in order to obtain reliable and reproducible data that meet clinical requirements for biomarker studies. Methods and Findings We applied globin reduction to peripheral whole blood samples and performed genome-wide transcriptome analysis using Illumina BeadChips. Real-time PCR was subsequently used to evaluate the quality of the array data and to elucidate the mode by which hemoglobin interferes with gene expression profiling. We demonstrated that, when applied in the context of standard microarray processing procedures, globin reduction results in a consistent and significant increase in the quality of beadarray data. When compared to their pre-globin-reduction counterparts, post-globin-reduction samples show improved detection statistics, lowered variance and increased sensitivity. More importantly, gender gene separation is remarkably clearer in post-globin-reduction samples than in pre-globin-reduction samples. Our study suggests that the poor data obtained from pre-globin-reduction samples result from the high concentration of hemoglobin derived from red blood cells either interfering with target mRNA binding or contributing a pseudo-binding background signal. Conclusion We therefore recommend the combination of globin mRNA reduction in peripheral whole blood samples and hybridization on Illumina BeadChips as a practical approach for biomarker studies. PMID:19381341
Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.
Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang
2016-05-01
In view of the climate changes caused by the continuously rising levels of atmospheric CO2, advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Cawthorn, J. M.; Brown, C. G.
1974-01-01
A study has been conducted of the future noise environment of Patrick Henry Airport and its neighboring communities projected for the year 1990. An assessment was made of the impact of advanced noise reduction technologies currently being considered. These advanced technologies include a two-segment landing approach procedure and aircraft hardware modifications or retrofits that would add sound-absorbent material in the engine nacelles or replace the present two- and three-stage fans with a single-stage fan of larger diameter. Noise Exposure Forecast (NEF) contours were computed for the baseline (nonretrofitted) aircraft for the projected traffic volume and fleet mix for the year 1990. These NEF contours are presented along with contours for a variety of retrofit options. Comparisons of the baseline with the noise reduction options are given in terms of total land area exposed to 30 and 40 NEF levels. Results are also presented for the effects on noise exposure area of the total number of daily operations.
Patterns and predictors of growth in divorced fathers' health status and substance use.
DeGarmo, David S; Reid, John B; Leve, Leslie D; Chamberlain, Patricia; Knutson, John F
2010-03-01
Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed.
Robertson, David S; Prevost, A Toby; Bowden, Jack
2016-09-30
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Marketing and reputation aspects of neonatal safeguards and hospital-security systems.
Smith, Alan D
2009-01-01
Technological advancements have migrated from personal-use electronics into the healthcare setting for security enhancements. Within maternity wards and nurseries, technology is seen as one of the best ways to protect newborns from abduction. The present study focuses on the systems and methods used in neonatal security, the security arrangements, staff training, impacts outside the control of the hospital, customer satisfaction, and customer relations management. Through hypothesis testing and exploratory analysis, gender biases and extremely high levels of security were found within a web-enabled, professional sample of 200 respondents. The factor-based constructs were found to be, in order of greatest explained variance: security concerns, personal technology usage, work technology applications, and demographic maturity concerns, resulting in four factor-based scores with a significant combined variance of 61.5%. It was found that continued improvement of technology-based security policies significantly enhanced hospitals' reputations in the highly competitive local healthcare industry.
Subha, Bakthavachallam; Song, Young Chae; Woo, Jung Hui
2015-09-15
The present study aims to optimize the slow-release biostimulant ball (BSB) for bioremediation of contaminated coastal sediment using response surface methodology (RSM). Different bacterial communities were evaluated using a pyrosequencing-based approach in contaminated coastal sediments. The effects of BSB size (1-5 cm), distance (1-10 cm) and time (1-4 months) on changes in chemical oxygen demand (COD) and volatile solid (VS) reduction were determined. Maximum reductions of COD and VS, 89.7% and 78.8%, respectively, were observed at a 3 cm ball size, 5.5 cm distance and 4 months; these values are the optimum conditions for effective treatment of contaminated coastal sediment. Most of the variance in COD and VS (0.9291 and 0.9369, respectively) was explained by our chosen models. BSB is a promising method for COD and VS reduction and enhancement of SRB diversity. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, D.L.; Arbaugh, M.J.; Wakefield, V.A.
1987-08-01
Evidence is presented for a reduction in radial growth of Jeffrey pine in the mixed conifer forest of Sequoia and Kings Canyon National Parks, California. Mean annual radial increment of trees with symptoms of ozone injury was 11% less than that of trees at sites without ozone injury. Larger diameter trees (>40 cm) and older trees (>100 yr) had greater decreases in growth than smaller and younger trees. Differences in radial growth patterns of injured and uninjured trees were prominent after 1965. Winter precipitation accounted for a large proportion of the variance in growth of all trees, although ozone-stressed trees were more sensitive to interannual variation in precipitation and temperature during recent years. These results corroborate surveys of visible ozone injury to foliage and are the first evidence of forest growth reduction associated with ozone injury in North America outside the Los Angeles basin.
Comparison of noise reduction systems
NASA Astrophysics Data System (ADS)
Noel, S. D.; Whitaker, R. W.
1991-06-01
When using infrasound as a tool for verification, the most important measurement to determine yield has been the peak-to-peak pressure amplitude of the signal. Therefore, there is a need to operate at the most favorable signal-to-noise ratio (SNR) possible. Winds near the ground can degrade the SNR, thereby making accurate signal amplitude measurement difficult. Wind noise reduction techniques were developed to help alleviate this problem; however, a noise reducing system should reduce the noise, and should not introduce distortion of coherent signals. An experiment is described to study system response for a variety of noise reducing configurations to a signal generated by an underground test (UGT) at the Nevada Test Site (NTS). In addition to the signal, background noise reduction is examined through measurements of variance. Sensors using two particular geometries of noise reducing equipment, the spider and the cross appear to deliver the best SNR. Because the spider configuration is easier to deploy, it is now the most commonly used.
Psychometric Properties of the Dietary Salt Reduction Self-Care Behavior Scale.
Srikan, Pratsani; Phillips, Kenneth D
2014-07-01
Valid, reliable, and culturally specific scales to measure salt reduction self-care behavior in older adults are needed. The purpose of this study was to develop the Dietary Salt Reduction Self-Care Behavior Scale (DSR-SCB) for use in hypertensive older adults, with Orem's self-care deficit theory as a base. Exploratory factor analysis, Rasch modeling, and reliability analysis were performed on data from 242 older Thai adults. Nine items loaded on one factor (factor loadings = 0.63 to 0.79) and accounted for 52.28% of the variance (eigenvalue = 4.71). The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.89, and Bartlett's test was significant (χ²(df = 36) = 916.48, p < 0.0001). Infit and outfit mean squares ranged from 0.81 to 1.25, while infit and outfit standardized mean squares were within ±2. Cronbach's alpha was 0.88. The 9-item DSR-SCB is a short and reliable scale. © The Author(s) 2014.
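The internal-consistency statistic reported above (Cronbach's alpha) follows directly from the item variances and the total-score variance; a minimal sketch with toy Likert-style data (illustrative scores, not the DSR-SCB items):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Toy 5-respondent, 3-item example with strongly correlated items
scores = np.array([[1, 2, 1],
                   [2, 2, 3],
                   [3, 3, 3],
                   [4, 5, 4],
                   [5, 4, 5]])
alpha = cronbach_alpha(scores)
```

Perfectly correlated items yield alpha of exactly 1; independent items drive it toward 0, which is why values near 0.9, as in the abstract, indicate high internal consistency.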
Kuffner, Ilsa B.; Brock, John C.; Grober-Dunsmore, Rikki; Bonito, Victor E.; Hickey, T. Donald; Wright, C. Wayne
2007-01-01
The realization that coral reef ecosystem management must occur across multiple spatial scales and habitat types has led scientists and resource managers to seek variables that are easily measured over large areas and correlate well with reef resources. Here we investigate the utility of new technology in airborne laser surveying (NASA Experimental Advanced Airborne Research Lidar (EAARL)) in assessing topographical complexity (rugosity) to predict reef fish community structure on shallow (n = 10–13 per reef). Rugosity at each station was assessed in situ by divers using the traditional chain-transect method (10-m scale), and remotely using the EAARL submarine topography data at multiple spatial scales (2, 5, and 10 m). The rugosity and biological datasets were analyzed together to elucidate the predictive power of EAARL rugosity in describing the variance in reef fish community variables and to assess the correlation between chain-transect and EAARL rugosity. EAARL rugosity was not well correlated with chain-transect rugosity, or with species richness of fishes (although statistically significant, the amount of variance explained by the model was very low). Variance in reef fish community attributes was better explained in reef-by-reef variability than by physical variables. However, once the reef-by-reef variability was taken into account in a two-way analysis of variance, the importance of rugosity could be seen on individual reefs. Fish species richness and abundance were statistically higher at high rugosity stations compared to medium and low rugosity stations, as predicted by prior ecological research. The EAARL shows promise as an important mapping tool for reef resource managers as they strive to inventory and protect coral reef resources.
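The chain-transect rugosity measure compared above reduces to a simple ratio of surface contour length to planar length; a minimal sketch on a synthetic depth profile (the depth values are illustrative, not EAARL or chain-transect data):

```python
import numpy as np

def rugosity(depths, dx):
    """Chain-transect-style rugosity: contour length over planar length."""
    dz = np.diff(np.asarray(depths, dtype=float))
    contour = np.sum(np.sqrt(dx**2 + dz**2))   # summed along-surface segments
    planar = dx * (len(depths) - 1)            # straight-line horizontal span
    return contour / planar

flat = rugosity([5.0, 5.0, 5.0, 5.0], dx=1.0)        # flat bottom: exactly 1.0
rough = rugosity([5.0, 6.5, 4.8, 6.2, 5.1], dx=1.0)  # relief: ratio above 1.0
```

The spatial scale enters through `dx`: coarser sampling smooths out fine relief and lowers the index, which is one reason lidar-derived rugosity at 2-10 m grids need not match 10 m chain transects draped over centimeter-scale structure.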
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
... Unemployment Tax Act (FUTA) provide that employers in a state that has an outstanding balance of advances under... balance of advances remains at the beginning of November 10 of that year. Because the account of South Carolina in the Unemployment Trust Fund had a balance of advances at the beginning of January 1 of 2009...
Comparison of patients' and health care professionals' attitudes towards advance directives.
Blondeau, D; Valois, P; Keyserlingk, E W; Hébert, M; Lavoie, M
1998-01-01
OBJECTIVES: This study was designed to identify and compare the attitudes of patients and health care professionals towards advance directives. Advance directives promote recognition of the patient's autonomy, letting the individual exercise a certain measure of control over life-sustaining care and treatment in the eventuality of becoming incompetent. DESIGN: Attitudes to advance directives were evaluated using a 44-item self-reported questionnaire. It yields an overall score as well as five factor scores: autonomy, beneficence, justice, external norms, and the affective dimension. SETTING: Health care institutions in the province of Québec, Canada. SURVEY SAMPLE: The sampling consisted of 921 subjects: 123 patients, 167 physicians, 340 nurses and 291 administrators of health care institutions. RESULTS: Although the general attitude of each population was favourable to the expression of autonomy, multivariate analysis of variance (MANOVA) indicated that physicians attached less importance to this subscale than did other populations (p < .001). Above all, they favoured legal external norms and beneficence. Physicians and administrators also attached less importance to the affective dimension than did patients and nurses. Specifically, physicians' attitudes towards advance directives were shown to be less positive than patients' attitudes. CONCLUSION: More attention should be given to the importance of adequately informing patients about advance directives because they may not represent an adequate means for patients to assert their autonomy. PMID:9800589
Research requirements to reduce empty weight of helicopters by use of advanced materials
NASA Technical Reports Server (NTRS)
Hoffstedt, D. J.
1976-01-01
Utilization of the new, lightweight, high-strength aerospace structural-composite (filament/matrix) materials, when specifically designed into a new aircraft, promises reductions in structural empty weight of 12 percent at recurring costs competitive with metals. A program of basic and applied research and demonstration is identified with the objective of advancing the state of the art to the point where civil helicopters are confidently designed, produced, certified, and marketed by 1985. A structural empty-weight reduction of 12 percent was shown to significantly reduce energy consumption in modern high-performance helicopters.
Butera, Katie A; George, Steven Z; Borsa, Paul A; Dover, Geoffrey C
2018-03-05
Transcutaneous electrical nerve stimulation (TENS) is commonly used for reducing musculoskeletal pain to improve function. However, peripheral nerve stimulation using TENS can alter muscle motor output. Few studies examine motor outcomes following TENS in a human pain model. Therefore, this study investigated the influence of TENS sensory stimulation primarily on motor output (strength) and secondarily on pain and disability following exercise-induced delayed-onset muscle soreness (DOMS). Thirty-six participants were randomized to a TENS treatment, TENS placebo, or control group after completing a standardized DOMS protocol. Measures included shoulder strength, pain, mechanical pain sensitivity, and disability. TENS treatment and TENS placebo groups received 90 minutes of active or sham treatment 24, 48, and 72 hours post-DOMS. All participants were assessed daily. A repeated measures analysis of variance and post-hoc analysis indicated that, compared to the control group, strength remained reduced in the TENS treatment group (48 hours post-DOMS, P < 0.05) and TENS placebo group (48 hours post-DOMS, P < 0.05; 72 hours post-DOMS, P < 0.05). A mixed-linear modeling analysis was conducted to examine the strength (motor) change. Randomization group explained 5.6% of between-subject strength variance (P < 0.05). Independent of randomization group, pain explained 8.9% of within-subject strength variance and disability explained 3.3% of between-subject strength variance (both P < 0.05). While active and placebo TENS resulted in prolonged strength inhibition, the results were nonsignificant for pain. Results indicated that higher pain and higher disability were independently related to decreased strength. Regardless of the impact on pain, TENS, or even the perception of TENS, may act as a nocebo for motor output. © 2018 World Institute of Pain.
Sex-specific selection under environmental stress in seed beetles.
Martinossi-Allibert, I; Arnqvist, G; Berger, D
2017-01-01
Sexual selection can increase rates of adaptation by imposing strong selection in males, thereby allowing efficient purging of the mutation load on population fitness at a low demographic cost. Indeed, sexual selection tends to be male-biased throughout the animal kingdom, but little empirical work has explored the ecological sensitivity of this sex difference. In this study, we generated theoretical predictions of sex-specific strengths of selection, environmental sensitivities and genotype-by-environment interactions and tested them in seed beetles by manipulating either larval host plant or rearing temperature. Using fourteen isofemale lines, we measured sex-specific reductions in fitness components, genotype-by-environment interactions and the strength of selection (variance in fitness) in the juvenile and adult stage. As predicted, variance in fitness increased with stress, was consistently greater in males than females for adult reproductive success (implying strong sexual selection), but was similar in the sexes in terms of juvenile survival across all levels of stress. Although genetic variance in fitness increased in magnitude under severe stress, heritability decreased and particularly so in males. Moreover, genotype-by-environment interactions for fitness were common but specific to the type of stress, sex and life stage, suggesting that new environments may change the relative alignment and strength of selection in males and females. Our study thus exemplifies how environmental stress can influence the relative forces of natural and sexual selection, as well as concomitant changes in genetic variance in fitness, which are predicted to have consequences for rates of adaptation in sexual populations. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.
Risk and the evolution of human exchange
Kaplan, Hillard S.; Schniter, Eric; Smith, Vernon L.; Wilson, Bart J.
2012-01-01
Compared with other species, exchange among non-kin is a hallmark of human sociality in both the breadth of individuals and total resources involved. One hypothesis is that extensive exchange evolved to buffer the risks associated with hominid dietary specialization on calorie dense, large packages, especially from hunting. ‘Lucky’ individuals share food with ‘unlucky’ individuals with the expectation of reciprocity when roles are reversed. Cross-cultural data provide prima facie evidence of pair-wise reciprocity and an almost universal association of high-variance (HV) resources with greater exchange. However, such evidence is not definitive; an alternative hypothesis is that food sharing is really ‘tolerated theft’, in which individuals possessing more food allow others to steal from them, owing to the threat of violence from hungry individuals. Pair-wise correlations may reflect proximity providing greater opportunities for mutual theft of food. We report a laboratory experiment of foraging and food consumption in a virtual world, designed to test the risk-reduction hypothesis by determining whether people form reciprocal relationships in response to variance of resource acquisition, even when there is no external enforcement of any transfer agreements that might emerge. Individuals can forage in a high-mean, HV patch or a low-mean, low-variance (LV) patch. The key feature of the experimental design is that individuals can transfer resources to others. We find that sharing hardly occurs after LV foraging, but among HV foragers sharing increases dramatically over time. The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships. PMID:22513855
Large contribution of natural aerosols to uncertainty in indirect forcing
NASA Astrophysics Data System (ADS)
Carslaw, K. S.; Lee, L. A.; Reddington, C. L.; Pringle, K. J.; Rap, A.; Forster, P. M.; Mann, G. W.; Spracklen, D. V.; Woodhouse, M. T.; Regayre, L. A.; Pierce, J. R.
2013-11-01
The effect of anthropogenic aerosols on cloud droplet concentrations and radiative properties is the source of one of the largest uncertainties in the radiative forcing of climate over the industrial period. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Here we perform a sensitivity analysis on a global model to quantify the uncertainty in cloud radiative forcing over the industrial period caused by uncertainties in aerosol emissions and processes. Our results show that 45 per cent of the variance of aerosol forcing since about 1750 arises from uncertainties in natural emissions of volcanic sulphur dioxide, marine dimethylsulphide, biogenic volatile organic carbon, biomass burning and sea spray. Only 34 per cent of the variance is associated with anthropogenic emissions. The results point to the importance of understanding pristine pre-industrial-like environments, with natural aerosols only, and suggest that improved measurements and evaluation of simulated aerosols in polluted present-day conditions will not necessarily result in commensurate reductions in the uncertainty of forcing estimates.
NASA Astrophysics Data System (ADS)
Demirkaya, Omer
2001-07-01
This study investigates the efficacy of filtering two-dimensional (2D) projection images in computed tomography (CT) by nonlinear diffusion filtering to remove statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections, as well as the original noisy projections, were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
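Nonlinear anisotropic diffusion of this kind is commonly formulated along the lines of Perona and Malik. The following is a rough sketch, not the paper's exact implementation; the iteration count, conduction constant `kappa`, and step size are illustrative assumptions:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.15):
    """Minimal Perona-Malik-style diffusion: smooths noise while the
    edge-stopping function g(|grad|) slows diffusion across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction coefficient
    for _ in range(n_iter):
        # nearest-neighbour differences (north, south, east, west),
        # with periodic boundaries via np.roll for simplicity
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the pairwise fluxes cancel under the periodic boundary handling, the scheme conserves the image mean, so noise suppression shows up directly as reduced variance in flat regions.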
Wisaijohn, Thunthita; Pimkhaokham, Atiphan; Lapying, Phenkhae; Itthichaisri, Chumpot; Pannarunothai, Supasit; Igarashi, Isao; Kawabuchi, Koichi
2010-01-01
This study aimed to develop a new casemix classification system as an alternative method for the budget allocation of oral healthcare services (OHCS). Initially, the International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Thai Modification (ICD-10-TM) codes related to OHCS were used to develop the software “Grouper”. This model was designed to allow the translation of dental procedures into eight-digit codes. Multiple regression analysis was used to analyze the relationship between the factors used for developing the model and resource consumption. Furthermore, the coefficient of variation, reduction in variance, and relative weight (RW) were applied to test the validity. The results demonstrated that 1,624 OHCS classifications, according to the diagnoses and the procedures performed, showed high homogeneity within groups and heterogeneity between groups. Moreover, the RW of the OHCS could be used to predict and control production costs. In conclusion, this new OHCS casemix classification has potential use in global decision making. PMID:20936134
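The reduction-in-variance criterion used in casemix validation is typically the share of cost variance explained by the grouping, i.e. the within-group variance removed relative to the total. A minimal sketch of that statistic and the coefficient of variation, with made-up cost data:

```python
import numpy as np

def reduction_in_variance(costs, groups):
    """RIV = 1 - (within-group sum of squares) / (total sum of squares).
    Values near 1 mean the casemix groups explain most cost variation."""
    costs = np.asarray(costs, dtype=float)
    groups = np.asarray(groups)
    total_ss = ((costs - costs.mean()) ** 2).sum()
    within_ss = sum(((costs[groups == g] - costs[groups == g].mean()) ** 2).sum()
                    for g in np.unique(groups))
    return 1.0 - within_ss / total_ss

def coefficient_of_variation(x):
    """Relative dispersion of costs within a group: std / mean."""
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()
```

Perfectly homogeneous groups give RIV = 1; groups with identical internal spread and identical means give RIV near 0.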
NASA Astrophysics Data System (ADS)
Liu, Lu; Hejazi, Mohamad; Li, Hongyi; Forman, Barton; Zhang, Xiao
2017-08-01
Previous modelling studies suggest that thermoelectric power generation is vulnerable to climate change, whereas studies based on historical data suggest the impact will be less severe. Here we explore the vulnerability of thermoelectric power generation in the United States to climate change by coupling an Earth system model with a thermoelectric power generation model, including state-level representation of environmental regulations on thermal effluents. We find that the impact of climate change is lower than in previous modelling estimates due to an inclusion of a spatially disaggregated representation of environmental regulations and provisional variances that temporarily relieve power plants from permit requirements. More specifically, our results indicate that climate change alone may reduce average generating capacity by 2-3% by the 2060s, while reductions of up to 12% are expected if environmental requirements are enforced without waivers for thermal variation. Our work highlights the significance of accounting for legal constructs and underscores the effects of provisional variances in addition to environmental requirements.
Kruppa, B; Rüden, H
1993-05-01
The question was whether, at high air-exchange rates, conventionally (turbulently) ventilated operating theatres achieve a reduction of airborne particles and bacteria comparable to laminar-airflow (LAF) operating theatres. Within the framework of energy conservation measures, the influence of the air-exchange rate on airborne particle and bacteria concentrations was determined in two identical operating theatres with conventional ventilation (wall diffusor panel) at air-exchange rates of 7.5, 10, 15 and 20/h without surgical activity, using the statistical procedure of analysis of variance. Statistically significant differences in airborne particle concentrations in supply and ambient air were found, especially between the air-exchange rates of 7.5 and 15/h. For airborne bacteria concentrations, no differences were found among the various air-exchange rates. The explained variance is quite high for non-viable particles (supply air: 37%, ambient air: 81%) but negligible for viable particles (bacteria), with values below 15%.
Measurement of hearing aid internal noise
Lewis, James D.; Goodman, Shawn S.; Bentler, Ruth A.
2010-01-01
Hearing aid equivalent input noise (EIN) measures assume the primary source of internal noise to be located prior to amplification and to be constant regardless of input level. EIN will underestimate internal noise in the case that noise is generated following amplification. The present study investigated the internal noise levels of six hearing aids (HAs). Concurrent with HA processing of a speech-like stimulus with both adaptive features (acoustic feedback cancellation, digital noise reduction, microphone directionality) enabled and disabled, internal noise was quantified for various stimulus levels as the variance across repeated trials. Changes in noise level as a function of stimulus level demonstrated that (1) generation of internal noise is not isolated to the microphone, (2) noise may be dependent on input level, and (3) certain adaptive features may contribute to internal noise. Quantifying internal noise as the variance of the output measures allows for noise to be measured under real-world processing conditions, accounts for all sources of noise, and is predictive of internal noise audibility. PMID:20370034
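Quantifying internal noise as the across-trial variance of the output, as the study does, can be sketched as follows. Repeated presentations of the same stimulus share the deterministic response, so what varies between trials is noise; the stimulus, trial count, and dB reference below are illustrative, not the paper's measurement setup:

```python
import numpy as np

def internal_noise_db(trials, ref=1.0):
    """Estimate device-internal noise from repeated recordings of the
    same stimulus: the per-sample variance across trials isolates the
    noise, which is then averaged and expressed in dB re `ref`."""
    trials = np.asarray(trials, dtype=float)   # shape (n_trials, n_samples)
    noise_var = trials.var(axis=0, ddof=1)     # variance across trials, per sample
    return 10 * np.log10(noise_var.mean() / ref ** 2)
```

A synthetic check: a sinusoidal "speech-like" output plus Gaussian noise of standard deviation 0.1 should come out near 10·log10(0.01) = -20 dB.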
Klápště, Jaroslav; Suontama, Mari; Telfer, Emily; Graham, Natalie; Low, Charlie; Stovold, Toby; McKinley, Russel; Dungey, Heidi
2017-01-01
Accurate inference of relatedness between individuals in a breeding population contributes to the precision of genetic parameter estimates, the effectiveness of inbreeding management and the amount of genetic progress delivered by breeding programs. Pedigree reconstruction has proven to be an efficient tool to correct pedigree errors and recover hidden relatedness in open-pollinated progeny tests, but the method can be limited by the lack of parental genotypes and a high proportion of alien pollen from outside the breeding population. Our study investigates the efficiency of sib-ship reconstruction in an advanced breeding population of Eucalyptus nitens with an only partially tracked pedigree. The sib-ship reconstruction allowed the identification of selfs (4% of the sample) and the exploration of their potential effect on inbreeding depression in the traits studied. We detected signs of inbreeding depression in diameter at breast height and growth strain, while no indications were observed in wood density, wood stiffness and tangential air-dry shrinkage. After the application of a corrected sib-ship relationship matrix, additive genetic variance and heritability were observed to increase where signs of inbreeding depression were initially detected. Conversely, the same genetic parameters for traits that appeared to be free of inbreeding depression decreased in size. It therefore appears that greater genetic variance may be due, at least in part, to contributions from inbreeding in these studied populations rather than to a removal of inbreeding as is traditionally thought.
48 CFR 970.3200-1 - Reduction or suspension of advance, partial, or progress payments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Reduction or suspension of... System DEPARTMENT OF ENERGY AGENCY SUPPLEMENTARY REGULATIONS DOE MANAGEMENT AND OPERATING CONTRACTS... receiving, assessing, and making recommendations to the Senior Procurement Executive. ...
Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Houck, J. A.
1976-01-01
A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied, reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
Effects of rotor model degradation on the accuracy of rotorcraft real time simulation
NASA Technical Reports Server (NTRS)
Houck, J. A.; Bowles, R. L.
1976-01-01
The effects are studied of degrading a rotating blade element rotor mathematical model to meet various real-time simulation requirements of rotorcraft. Three methods of degradation were studied: reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments, (2) reduction of number of blades, and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which the rotating blade element rotor mathematical model should not be used.
Observed and Predicted Risk of Breast Cancer Death in Randomized Trials on Breast Cancer Screening.
Autier, Philippe; Boniol, Mathieu; Smans, Michel; Sullivan, Richard; Boyle, Peter
2016-01-01
The role of breast screening in breast cancer mortality declines is debated. Screening impacts cancer mortality by decreasing the number of advanced cancers with poor prognosis, while cancer treatment works by decreasing the case-fatality rate. Hence, reductions in cancer death rates attributable to screening should directly reflect reductions in advanced cancer rates. We verified whether, in breast screening trials, the observed reductions in the risk of breast cancer death could be predicted from reductions in advanced breast cancer rates. The Greater New York Health Insurance Plan trial (HIP) is the only breast screening trial that reported stage-specific cancer fatality for the screening and the control group separately. The Swedish Two-County trial (TCT) reported size-specific fatalities for cancer patients in both screening and control groups. We computed predicted numbers of breast cancer deaths, from which we calculated predicted relative risks (RR) with 95% confidence intervals. The Age trial in England performed its own calculations of predicted relative risk. The observed and predicted RR of breast cancer death were 0.72 (0.56-0.94) and 0.98 (0.77-1.24) in the HIP trial, and 0.79 (0.78-1.01) and 0.90 (0.80-1.01) in the Age trial. In the TCT, the observed RR was 0.73 (0.62-0.87), while the predicted RR was 0.89 (0.75-1.05) if overdiagnosis was assumed to be negligible and 0.83 (0.70-0.97) if extra cancers were excluded. In breast screening trials, factors other than screening have contributed to reductions in the risk of breast cancer death, most probably by reducing the fatality of advanced cancers in screening groups. These factors were the better management of breast cancer patients and the underreporting of breast cancer as the underlying cause of death. Breast screening trials should publish the stage-specific fatalities observed in each group.
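The prediction logic described above, deriving the expected death ratio purely from the stage shift, can be illustrated with a toy calculation. The stage counts and fatalities below are invented, and applying one shared fatality vector to both arms is our simplification of the trial-specific computations:

```python
def predicted_rr(cases_screen, cases_ctrl, fatality):
    """Expected deaths in each arm if screening acts only by shifting
    the stage distribution: apply the same stage-specific case fatality
    to both arms' stage counts, then take the ratio of expected deaths.
    Assumes equal person-time in the two arms."""
    deaths_screen = sum(n * f for n, f in zip(cases_screen, fatality))
    deaths_ctrl = sum(n * f for n, f in zip(cases_ctrl, fatality))
    return deaths_screen / deaths_ctrl

# Invented example: screening shifts 30 of 100 cases from the advanced
# stage (40% fatality) to the early stage (5% fatality).
rr = predicted_rr(cases_screen=[80, 20], cases_ctrl=[50, 50],
                  fatality=[0.05, 0.40])
```

If the observed RR is much lower than this stage-shift prediction, something other than the stage shift (e.g. differential treatment or cause-of-death coding) must be contributing, which is the paper's argument.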
LaMothe, Jeremy; Baxter, Josh R; Gilbert, Susannah; Murphy, Conor I; Karnovsky, Sydney C; Drakos, Mark C
2017-06-01
Syndesmotic injuries can be associated with poor patient outcomes and posttraumatic ankle arthritis, particularly in the case of malreduction. However, ankle joint contact mechanics following a syndesmotic injury and reduction remains poorly understood. The purpose of this study was to characterize the effects of a syndesmotic injury and reduction techniques on ankle joint contact mechanics in a biomechanical model. Ten cadaveric whole lower leg specimens with undisturbed proximal tibiofibular joints were prepared and tested in this study. Contact area, contact force, and peak contact pressure were measured in the ankle joint during simulated standing in the intact, injured, and 3 reduction conditions: screw fixation with a clamp, screw fixation without a clamp (thumb technique), and a suture-button construct. Differences in these ankle contact parameters were detected between conditions using repeated-measures analysis of variance. Syndesmotic disruption decreased tibial plafond contact area and force. Syndesmotic reduction did not restore ankle loading mechanics to values measured in the intact condition. Reduction with the thumb technique was able to restore significantly more joint contact area and force than the reduction clamp or suture-button construct. Syndesmotic disruption decreased joint contact area and force. Although the thumb technique performed significantly better than the reduction clamp and suture-button construct, syndesmotic reduction did not restore contact mechanics to intact levels. Decreased contact area and force with disruption imply that other structures are likely receiving more loads (eg, medial and lateral gutters), which may have clinical implications such as the development of posttraumatic arthritis.
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
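A sketch of the IVhet estimator as described: the fixed-effect inverse-variance point estimate, paired with a heterogeneity-inflated quasi-likelihood variance. The DerSimonian-Laird tau-squared plug-in below is our assumption about the implementation, not a verbatim transcription of the paper's formulas:

```python
import numpy as np

def ivhet(effects, variances):
    """Pooled estimate under the fixed-effect assumption with a
    quasi-likelihood variance structure (sketch of the IVhet model)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    theta = (w * y).sum() / w.sum()             # inverse-variance point estimate
    # DerSimonian-Laird between-study variance (assumed plug-in)
    q = (w * (y - theta) ** 2).sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    wn = w / w.sum()                             # normalized FE weights
    var_theta = (wn ** 2 * (v + tau2)).sum()     # heterogeneity-inflated variance
    return theta, var_theta
```

With homogeneous studies tau-squared vanishes and the variance collapses to the usual fixed-effect value 1/sum(1/v_i); under heterogeneity the variance is inflated while the point estimate still favors the larger studies.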
Nascent RNA kinetics: Transient and steady state behavior of models of transcription
NASA Astrophysics Data System (ADS)
Choubey, Sandeep
2018-02-01
Regulation of transcription is a vital process in cells, but mechanistic details of this regulation still remain elusive. The dominant approach to unravel the dynamics of transcriptional regulation is to first develop mathematical models of transcription and then experimentally test the predictions these models make for the distribution of mRNA and protein molecules at the individual cell level. However, these measurements are affected by a multitude of downstream processes which make them difficult to interpret. Recent experimental advancements allow for counting the nascent mRNA number of a gene as a function of time at the single-cell level. These measurements closely reflect the dynamics of transcription. In this paper, we consider a general mechanism of transcription with stochastic initiation and deterministic elongation and probe its impact on the temporal behavior of nascent RNA levels. Using techniques from queueing theory, we derive exact analytical expressions for the mean and variance of the nascent RNA distribution as functions of time. We apply these analytical results to obtain the mean and variance of the nascent RNA distribution for specific models of transcription. These models of initiation exhibit qualitatively distinct transient behaviors for both the mean and variance, which further allows us to discriminate between them. Stochastic simulations confirm these results. Overall, the analytical results presented here provide the necessary tools to connect mechanisms of transcription initiation to single-cell measurements of nascent RNA.
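For the simplest initiation model, constitutive (Poisson) initiation with deterministic elongation, the queueing picture gives mean and variance of the nascent count both equal to k·min(t, T), where k is the initiation rate and T the elongation (dwell) time: the nascent count at time t is just the number of initiations in the window (t - T, t]. A quick Monte Carlo check of the steady state (rates and cell counts here are arbitrary):

```python
import numpy as np

def nascent_counts(k_init, dwell, t_obs, n_cells, rng):
    """Poisson initiation at rate k_init; each transcript stays 'nascent'
    for a deterministic elongation time `dwell`. The nascent count at
    t_obs is the number of initiations falling in (t_obs - dwell, t_obs]."""
    counts = np.empty(n_cells, dtype=int)
    for i in range(n_cells):
        # initiation times on [0, t_obs] from a homogeneous Poisson process
        n = rng.poisson(k_init * t_obs)
        times = rng.uniform(0.0, t_obs, n)
        counts[i] = np.count_nonzero(times > t_obs - dwell)
    return counts
```

With k = 2 and T = 3, observing well past T should give a Poisson steady state with mean = variance = kT = 6; bursty (two-state) initiation would instead show variance exceeding the mean.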
Female scarcity reduces women's marital ages and increases variance in men's marital ages.
Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom
2010-08-05
When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio (OSR) and the means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.
Zoellner, Jamie M.; Porter, Kathleen J.; Chen, Yvonnes; Hedrick, Valisa E.; You, Wen; Hickman, Maja; Estabrooks, Paul A.
2017-01-01
Objective: Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Design: Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Main Outcome Measures: Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. Results: TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13–20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6–38%) and behaviour (average 30%, range 6–55%) were significant. Conclusion: Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases. PMID:28165771
Ge, Jing; Zhang, Guoping
2015-01-01
Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance in epileptic seizure detection and prediction. This is because the diversity and the evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done to address a quantitative DVV approach; hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. This method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. Then a multi-kernel strategy was proposed in the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity measure proved more sensitive than energy and entropy. An 87.5% overall accuracy of recognition and a 75.0% overall accuracy of forecasting were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection, and is therefore of importance for practical applications.
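The standard (non-quantitative) DVV procedure computes target variances of one-step-ahead samples within neighbourhoods of delay vectors, swept over a range of neighbourhood radii. A minimal sketch of that core computation follows; the embedding dimension and radius grid are illustrative choices, and this omits the surrogate-data comparison used in full DVV analyses:

```python
import numpy as np

def dvv_target_variance(x, m=3, n_span=20):
    """Embed x into delay vectors of length m; for each radius in a grid
    spanning the pairwise-distance distribution, average the variance of
    the one-step-ahead targets within each neighbourhood, normalized by
    the variance of all targets (the DVV 'target variance' curve)."""
    x = np.asarray(x, dtype=float)
    dv = np.lib.stride_tricks.sliding_window_view(x, m)[:-1]  # delay vectors
    targets = x[m:]                                           # next samples
    d = np.linalg.norm(dv[:, None, :] - dv[None, :, :], axis=2)
    mu, sd = d[d > 0].mean(), d[d > 0].std()
    radii = np.linspace(mu - 2 * sd, mu + 2 * sd, n_span)
    out = []
    for r in radii:
        vs = []
        for i in range(len(dv)):
            idx = np.flatnonzero(d[i] <= r)
            idx = idx[idx != i]                 # exclude the vector itself
            if len(idx) >= 2:
                vs.append(targets[idx].var(ddof=1))
        out.append(np.mean(vs) / targets.var(ddof=1) if vs else np.nan)
    return radii, np.array(out)
```

For a purely stochastic (linear) signal the curve stays near 1 at large radii; strong dips at small radii indicate determinism, which is the feature a quantitative DVV method would extract.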
Fan Noise Reduction: An Overview
NASA Technical Reports Server (NTRS)
Envia, Edmane
2001-01-01
Fan noise reduction technologies developed as part of the engine noise reduction element of the Advanced Subsonic Technology Program are reviewed. Developments in low-noise fan stage design, swept and leaned outlet guide vanes, active noise control, fan flow management, and scarfed inlet are discussed. In each case, a description of the method is presented and, where available, representative results and general conclusions are discussed. The review concludes with a summary of the accomplishments of the AST-sponsored fan noise reduction research and a few thoughts on future work.
Age-Related Gray and White Matter Changes in Normal Adult Brains
Farokhian, Farnaz; Yang, Chunlan; Beheshti, Iman; Matsuda, Hiroshi; Wu, Shuicai
2017-01-01
Normal aging is associated with both structural changes in many brain regions and functional declines in several cognitive domains with advancing age. Advanced neuroimaging techniques enable explorative analyses of structural alterations that can be used as assessments of such age-related changes. Here we used voxel-based morphometry (VBM) to investigate regional and global brain volume differences among four groups of healthy adults from the IXI Dataset: older females (OF, mean age 68.35 yrs; n=69), older males (OM, 68.43 yrs; n=66), young females (YF, 27.09 yrs; n=71), and young males (YM, 27.91 yrs; n=71), using 3D T1-weighted MRI data. At the global level, we investigated the influence of age and gender on brain volumes using a two-way analysis of variance. With respect to gender, we used the Pearson correlation to investigate global brain volume alterations due to age in the older and young groups. At the regional level, we used a flexible factorial statistical test to compare the means of gray matter (GM) and white matter (WM) volume alterations among the four groups. We observed different patterns in both the global and regional GM and WM alterations in the young and older groups with respect to gender. At the global level, we observed significant influences of age and gender on global brain volumes. At the regional level, the older subjects showed a widespread reduction in GM volume in regions of the frontal, insular, and cingulate cortices compared to the young subjects in both genders. Compared to the young subjects, the older subjects showed a widespread WM decline prominently in the thalamic radiations, in addition to increased WM in pericentral and occipital areas. Knowledge of these observed brain volume differences and changes may contribute to the elucidation of mechanisms underlying aging as well as age-related brain atrophy and disease. PMID:29344423
Nguyen, John T; Rich, Josiah D; Brockmann, Bradley W; Vohr, Fred; Spaulding, Anne; Montague, Brian T
2015-08-01
Hepatitis C virus (HCV) infection continues to disproportionately affect incarcerated populations. New HCV drugs present opportunities and challenges to address HCV in corrections. The goal of this study was to evaluate the impact of the treatment costs for HCV infection in a state correctional population through a budget impact analysis comparing differing treatment strategies. Electronic and paper medical records were reviewed to estimate the prevalence of hepatitis C within the Rhode Island Department of Corrections. Three treatment strategies were evaluated as follows: (1) treating all chronically infected persons, (2) treating only patients with demonstrated fibrosis, and (3) treating only patients with advanced fibrosis. Budget impact was computed as the percentage of pharmacy and overall healthcare expenditures accrued by total drug costs assuming entirely interferon-free therapy. Sensitivity analyses assessed potential variance in costs related to variability in HCV prevalence, genotype, estimated variation in market pricing, length of stay for the sentenced population, and uptake of newly available regimens. Chronic HCV prevalence was estimated at 17% of the total population. Treating all sentenced inmates with at least 6 months remaining of their sentence would cost about $34 million, 13 times the pharmacy budget and almost twice the overall healthcare budget. Treating inmates with advanced fibrosis would cost about $15 million. A hypothetical 50% reduction in total drug costs for future therapies could cost $17 million to treat all eligible inmates. With immense costs projected with new treatment, it is unlikely that correctional facilities will have the capacity to treat all those afflicted with HCV. Alternative payment strategies in collaboration with outside programs may be necessary to curb this epidemic. In order to improve care and treatment delivery, drug costs also need to be seriously reevaluated to be more accessible and equitable now that HCV is more curable.
Kimme-Smith, C; Rothschild, P A; Bassett, L W; Gold, R H; Moler, C
1989-01-01
Six different combinations of film-processor temperature (33.3 degrees C, 35 degrees C), development time (22 sec, 44 sec), and chemistry (Du Pont medium contrast developer [MCD] and Kodak rapid process [RP] developer) were each evaluated by separate analyses with Hurter and Driffield curves, test images of plastic step wedges, noise variance analysis, and phantom images; each combination also was evaluated clinically. Du Pont MCD chemistry produced greater contrast than did Kodak RP chemistry. A change in temperature from 33.3 degrees C (92 degrees F) to 35 degrees C (95 degrees F) had the least effect on dose and image contrast. Temperatures of 36.7 degrees C (98 degrees F) and 38.3 degrees C (101 degrees F) also were tested with extended processing. The speed increased for 36.7 degrees C but decreased at 38.3 degrees C. Base plus fog increased, but contrast decreased for these higher temperatures. Increasing development time had the greatest effect on decreasing the dose required for equivalent film darkening when imaging BR12 breast equivalent test objects; ion chamber measurements showed a 32% reduction in dose when the development time was increased from 22 to 44 sec. Although noise variance doubled in images processed with the extended development time, diagnostic capability was not compromised. Extending the processing time for mammographic films was an effective method of dose reduction, whereas varying the processing temperature and chemicals had less effect on contrast and dose.
Jeenah, M; September, W; Graadt van Roggen, F; de Villiers, W; Seftel, H; Marais, D
1993-01-04
Simvastatin, an inhibitor of HMG CoA reductase, lowers the plasma total cholesterol and LDL-cholesterol concentration in familial hypercholesterolemic patients. The efficacy of the drug shows considerable inter-individual variation, however. In this study we have assessed the influence of certain LDL-receptor gene mutations on this variation. A group of 20 male and female heterozygotic familial hypercholesterolemic patients, all Afrikaners and each bearing one of two different LDL receptor gene mutations, FH Afrikaner-1 (FH1) and FH Afrikaner-2 (FH2), was treated with simvastatin (40 mg once daily) for 18 months. The average reduction in total plasma cholesterol was 35.3% in the case of the FH2 men but only 23.2% in that of the FH1 men (P = 0.005); the reduction in LDL cholesterol concentrations was also greater in the FH2 group (39% as opposed to 27.1%, P = 0.02). The better response of the FH2 group was also evident when men and women were considered together. Female FH1 patients responded better to simvastatin treatment, however, than did males with the same gene defect. Mutations at the LDL-receptor locus may thus play a significant role in the variable efficacy of the drug. The particular mutations in the males of this group may have contributed up to 35% of the variance in total cholesterol response and 29% of the variance in LDL-cholesterol response to simvastatin treatment.
Mudallal, Rola H.; Othman, Wafa’a M.; Al Hassan, Nahid F.
2017-01-01
Nurse burnout is a widespread phenomenon characterized by a reduction in nurses’ energy that manifests in emotional exhaustion, lack of motivation, and feelings of frustration and may lead to reductions in work efficacy. This study was conducted to assess the level of burnout among Jordanian nurses and to investigate the influence of leader empowering behaviors (LEBs) on nurses’ feelings of burnout in an endeavor to improve nursing work outcomes. A cross-sectional and correlational design was used. The Leader Empowering Behaviors Scale and the Maslach Burnout Inventory (MBI) were employed to collect data from 407 registered nurses, recruited from 11 hospitals in Jordan. The Jordanian nurses exhibited high levels of burnout as demonstrated by their high scores for Emotional Exhaustion (EE) and Depersonalization (DP) and moderate scores for Personal Accomplishment (PA). Factors related to work conditions, nurses’ demographic traits, and LEBs were significantly correlated with the burnout categories. A stepwise regression model exposed 4 factors that predicted EE: hospital type, nurses’ work shift, providing autonomy, and fostering participation in decision making. Gender, fostering participation in decision making, and department type were responsible for 5.9% of the DP variance, whereas facilitating goal attainment and nursing experience accounted for 8.3% of the PA variance. This study highlights the importance of the role of nurse leaders in improving work conditions and empowering and motivating nurses to decrease nurses’ feelings of burnout, reduce turnover rates, and improve the quality of nursing care. PMID:28844166
Integrated System Test of the Advanced Instructional System (AIS). Final Report.
ERIC Educational Resources Information Center
Lintz, Larry M.; And Others
The integrated system test for the Advanced Instructional System (AIS) was designed to provide quantitative information regarding training time reductions resulting from certain computer managed instruction features. The reliabilities of these features and of support systems were also investigated. Basic computer managed instruction reduced…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
...) provide that employers in a state that has an outstanding balance of advances under Title XII of the... such January 1 occurs, if a balance of advances remains at the beginning of November 10 of that year...
Reductive dehalogenation of disinfection byproducts by an activated carbon-based electrode system.
Li, Yuanqing; Kemper, Jerome M; Datuin, Gwen; Akey, Ann; Mitch, William A; Luthy, Richard G
2016-07-01
Low molecular weight, uncharged, halogenated disinfection byproducts (DBPs) are poorly removed by the reverse osmosis and advanced oxidation process treatment units often applied for further treatment of municipal wastewater for potable reuse. Granular activated carbon (GAC) treatment effectively sorbed 22 halogenated DBPs. Conversion of the GAC to a cathode within an electrolysis cell resulted in significant degradation of the 22 halogenated DBPs by reductive electrolysis at -1 V vs. Standard Hydrogen Electrode (SHE). The lowest removal efficiency over 6 h electrolysis was for trichloromethane (chloroform; 47%) but removal efficiencies were >90% for 13 of the 22 DBPs. In all cases, DBP degradation was higher than in electrolysis-free controls, and degradation was verified by the production of halides as reduction products. Activated carbons and charcoal were more effective than graphite for electrolysis, with graphite featuring poor sorption for the DBPs. A subset of halogenated DBPs (e.g., haloacetonitriles, chloropicrin) were degraded upon sorption to the GAC, even without electrolysis. Using chloropicrin as a model, experiments indicated that this loss was attributable to the partial reduction of sorbed chloropicrin from reducing equivalents in the GAC. Reducing equivalents depleted by these reactions could be restored when the GAC was treated by reductive electrolysis. GAC treatment of an advanced treatment train effluent for potable reuse effectively reduced the concentrations of chloroform, bromodichloromethane and dichloroacetonitrile measured in the column influent to below the method detection limits. Treatment of the GAC by reductive electrolysis at -1 V vs. SHE over 12 h resulted in significant degradation of the chloroform (63%), bromodichloromethane (96%) and dichloroacetonitrile (99%) accumulated on the GAC. 
The results suggest that DBPs in advanced treatment train effluents could be captured and degraded continuously by reductive electrolysis using a GAC-based cathode. Copyright © 2016 Elsevier Ltd. All rights reserved.
Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.
Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E
2017-12-11
Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
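The conventional linear approach described here amounts to an occupancy plot: regressing the drug-induced reduction in regional binding on the baseline binding, with the slope estimating occupancy. A minimal sketch with invented regional values (the numbers and noise level are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hypothetical regional binding potentials at baseline.
bp_base = np.array([2.1, 1.6, 1.2, 0.9, 0.5])
true_occ = 0.6
rng = np.random.default_rng(0)
# Post-drug scan: each region reduced by the same occupancy, plus noise.
bp_drug = bp_base * (1 - true_occ) + rng.normal(0, 0.01, bp_base.size)

# Occupancy plot: regress the drug-induced reduction on baseline binding;
# the slope estimates occupancy (intercept absorbs any common offset).
delta = bp_base - bp_drug
A = np.vstack([bp_base, np.ones_like(bp_base)]).T
occ, intercept = np.linalg.lstsq(A, delta, rcond=None)[0]
print(round(occ, 2))
```

This simple least-squares fit is exactly what the abstract criticizes: it weights all regions equally and ignores their nonzero covariance, which is what the maximum-likelihood variants are designed to fix.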
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
The applicability of the operational radar and raingauge networks for urban hydrology is insufficient. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, respectively based upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as case study an urban catchment (∼8.65 km(2)) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
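Of the two selected techniques, mean bias reduction is the simpler: a single multiplicative factor rescales the whole radar field so that it matches the gauges on average. A minimal sketch with invented accumulations (all values hypothetical):

```python
import numpy as np

# Hypothetical rainfall accumulations (mm) over one event.
radar_field = np.array([[4.0, 5.0, 6.0],
                        [5.5, 7.0, 8.0],
                        [3.0, 4.5, 5.0]])
# Radar pixels collocated with three rain gauges, and the gauge readings.
radar_at_gauges = np.array([4.0, 7.0, 5.0])
gauge_obs      = np.array([5.2, 8.4, 6.4])

# Mean-field bias adjustment: one multiplicative factor for the whole
# field, chosen so radar and gauge totals agree at the gauge sites.
bias = gauge_obs.sum() / radar_at_gauges.sum()
adjusted = radar_field * bias
print(round(bias, 2))  # 1.25
```

Because the factor is spatially uniform, this corrects the mean but leaves the spatial error structure untouched, which is why the error-variance-minimisation technique reproduced rainfall variability better in the study.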
Irreducible Uncertainty in Terrestrial Carbon Projections
NASA Astrophysics Data System (ADS)
Lovenduski, N. S.; Bonan, G. B.
2016-12-01
We quantify and isolate the sources of uncertainty in projections of carbon accumulation by the ocean and terrestrial biosphere over 2006-2100 using output from Earth System Models participating in the 5th Coupled Model Intercomparison Project. We consider three independent sources of uncertainty in our analysis of variance: (1) internal variability, driven by random, internal variations in the climate system, (2) emission scenario, driven by uncertainty in future radiative forcing, and (3) model structure, wherein different models produce different projections given the same emission scenario. Whereas uncertainty in projections of ocean carbon accumulation by 2100 is 100 Pg C and driven primarily by emission scenario, uncertainty in projections of terrestrial carbon accumulation by 2100 is 50% larger than that of the ocean, and driven primarily by model structure. This structural uncertainty is correlated with emission scenario: the variance associated with model structure is an order of magnitude larger under a business-as-usual scenario (RCP8.5) than a mitigation scenario (RCP2.6). In an effort to reduce this structural uncertainty, we apply various model weighting schemes to our analysis of variance in terrestrial carbon accumulation projections. The largest reductions in uncertainty are achieved when giving all the weight to a single model; here the uncertainty is of a similar magnitude to the ocean projections. Such an analysis suggests that this structural uncertainty is irreducible given current terrestrial model development efforts.
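The analysis-of-variance partitioning can be illustrated in miniature: with projections laid out as models by scenarios, the spread of the scenario means and of the model means separates the two variance sources. A toy sketch with invented numbers, ignoring internal variability (not the CMIP5 values):

```python
import numpy as np

# Hypothetical 2100 carbon-accumulation projections (Pg C):
# rows = models, columns = emission scenarios.
proj = np.array([[ 60.0, 120.0],
                 [ 20.0,  40.0],
                 [100.0, 200.0]])

grand = proj.mean()
# Variance attributable to scenario (spread of scenario means) and to
# model structure (spread of model means), as in a two-way layout
# without replication.
var_scenario = ((proj.mean(axis=0) - grand) ** 2).mean()
var_model    = ((proj.mean(axis=1) - grand) ** 2).mean()
print(var_model > var_scenario)  # True
```

In this toy example, as in the terrestrial projections described above, the between-model spread dominates the between-scenario spread.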
Massage and Reiki used to reduce stress and anxiety: Randomized Clinical Trial
Kurebayashi, Leonice Fumiko Sato; Turrini, Ruth Natalia Teresa; de Souza, Talita Pavarini Borges; Takiguchi, Raymond Sehiji; Kuba, Gisele; Nagumo, Marisa Toshi
2016-01-01
ABSTRACT Objective: to evaluate the effectiveness of massage and reiki in the reduction of stress and anxiety in clients at the Institute for Integrated and Oriental Therapy in Sao Paulo (Brazil). Method: a randomized, parallel clinical trial with an initial sample of 122 people divided into three groups: Massage + Rest (G1), Massage + Reiki (G2) and a Control group without intervention (G3). The List of Stress Symptoms and the State-Trait Anxiety Inventory were used to evaluate the groups at the start and after 8 sessions (1 month), during 2015. Results: there were statistically significant differences (p = 0.000) according to ANOVA (analysis of variance) for stress between groups 2 and 3 (p = 0.014), with a 33% reduction and a Cohen's d of 0.78. In relation to anxiety-state, there was a reduction in the intervention groups compared with the control group (p < 0.01), with a 21% reduction in group 2 (Cohen's d of 1.18) and a 16% reduction for group 1 (Cohen's d of 1.14). Conclusion: Massage + Reiki produced better results amongst the groups, and the conclusion is that further studies should be done with a placebo group to evaluate the impact of the technique separately from other techniques. RBR-42c8wp PMID:27901219
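The effect sizes quoted in this abstract are Cohen's d values, a mean difference divided by the pooled standard deviation. A minimal sketch with invented scores (not the trial's data):

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical stress scores: control vs. intervention group.
control      = [30, 32, 28, 35, 31]
intervention = [24, 26, 22, 27, 25]
d = cohens_d(control, intervention)
print(round(d, 2))  # 2.81
```

Values near 0.8 (as for the stress reduction above) are conventionally read as large effects.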
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
... Development or Prototype Units (DFARS Case 2009-D034) AGENCY: Defense Acquisition Regulations System... the procurement of prototype units. The DFARS implementation places specific limits, in accordance... components or the procurement of prototype units. IV. Paperwork Reduction Act The Paperwork Reduction Act...
NASA Technical Reports Server (NTRS)
Whitehead, A. H., Jr.
1978-01-01
Current domestic and international air cargo operations are studied and the characteristics of 1990 air cargo demand are postulated from surveys conducted at airports and with shippers, consignees, and freight forwarders as well as air, land, and ocean carriers. Simulation and route optimization programs are exercised to evaluate advanced aircraft concepts. The results show that proposed changes in the infrastructure and improved cargo loading efficiencies are as important in enhancing the prospects of air cargo growth as is the advent of advanced freighter aircraft. Potential reductions in aircraft direct operating costs are estimated and related to future total revenue. Service and cost elasticities are established and utilized to estimate future potential tariff reductions that may be realized through direct and indirect operating cost reductions and economies of scale.
Recent advances in fixation of the craniomaxillofacial skeleton.
Meslemani, Danny; Kellman, Robert M
2012-08-01
Fixation of the craniomaxillofacial skeleton is an evolving aspect for facial plastic, oral and maxillofacial, and plastic surgery. This review looks at the recent advances that aid in reduction and fixation of the craniomaxillofacial skeleton. More surgeons are using resorbable plates for craniomaxillofacial fixation. A single miniplate on the inferior border of the mandible may be sufficient to reduce and fixate an angle fracture. Percutaneous K-wires may assist in plating angle fractures. Intraoperative computed tomography (CT) may prove to be useful for assessing reduction and fixation. Resorbable plates are becoming increasingly popular in orthognathic surgery and facial trauma surgery. There are newer operative techniques for fixating the angle of the mandible. Also, the utilization of the intraoperative CT provides immediate feedback for accurate reduction and fixation. Prebent surgical plates save operative time, decrease errors, and provide more accurate fixation.
Ke, Li-Shan; Huang, Xiaoyan; Hu, Wen-Yu; O'Connor, Margaret; Lee, Susan
2017-05-01
Studies have indicated that family members or health professionals may not know or predict their older relatives' or patients' health preferences. Although advance care planning is encouraged for older people to prepare end-of-life care, it is still challenging. To understand the experiences and perspectives of older people regarding advance care planning. A systematic review of qualitative studies and meta-synthesis was conducted. CINAHL, MEDLINE, EMBASE, and PsycINFO databases were searched. A total of 50 articles were critically appraised and a thematic synthesis was undertaken. Four themes were identified: life versus death, internal versus external, benefits versus burdens, and controlling versus being controlled. The view of life and death influenced older people's willingness to discuss their future. The characteristics, experiences, health status, family relationship, and available resources also affected their plans of advance care planning. Older people needed to balance the benefits and burdens of advance care planning, and then judge their own ability to make decisions about end-of-life care. Older people's perspectives and experiences of advance care planning were varied and often conflicted; cultural differences amplified variances among older people. Truthful information, available resources, and family support are needed to enable older people to maintain dignity at the end of life. The views of life and death for older people from different cultures should be compared to assist health professionals to understand older people's attitudes toward advance care planning, and thus to develop appropriate strategies to promote advance care planning in different cultures.
A class of optimum digital phase locked loops for the DSN advanced receiver
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Kumar, R.
1985-01-01
A class of optimum digital filters for digital phase locked loop of the deep space network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters are obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the property of having the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
Rewiring the Carbon Economy: Engineered Carbon Reduction Listening Day Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Illing, Lauren; Natelson, Robert; Resch, Michael
On July 8, 2017, the U.S. Department of Energy’s Bioenergy Technologies Office (BETO) sponsored the Engineered Carbon Reduction Listening Day: Advanced Strategies to Bypass Land Use for the Emerging Bioeconomy in La Jolla, California. This event explored non-photosynthetic carbon dioxide–reduction technologies, including electrocatalytic, thermocatalytic, photocatalytic, and biocatalytic approaches. BETO has summarized stakeholder input from the listening day in a summary report.
Case Study of Cardiovascular Risk Reduction in the Northwest Region and TRICARE Region 11
2003-11-01
and TRICARE Region 11. The second employee is not directly hired for cardiovascular risk reduction, but for tobacco cessation classes and consultation...Canadians with diabetes mellitus. Advances in Experimental Medicine and Biology, 373-380...
Advances in Photocatalytic CO2 Reduction with Water: A Review
Nahar, Samsun; Zain, M. F. M.; Kadhum, Abdul Amir H.; Hasan, Hassimi Abu; Hasan, Md. Riad
2017-01-01
In recent years, the increasing level of CO2 in the atmosphere has not only contributed to global warming but has also triggered considerable interest in photocatalytic reduction of CO2. The reduction of CO2 with H2O using sunlight is an innovative way to solve the current growing environmental challenges. This paper reviews the basic principles of photocatalysis and photocatalytic CO2 reduction, discusses the measures of the photocatalytic efficiency and summarizes current advances in the exploration of this technology using different types of semiconductor photocatalysts, such as TiO2 and modified TiO2, layered-perovskite Ag/ALa4Ti4O15 (A = Ca, Ba, Sr), ferroelectric LiNbO3, and plasmonic photocatalysts. Visible light harvesting, novel plasmonic photocatalysts offer potential solutions for some of the main drawbacks in this reduction process. Effective plasmonic photocatalysts that have shown reduction activities towards CO2 with H2O are highlighted here. Although this technology is still at an embryonic stage, further studies with standard theoretical and comprehensive format are suggested to develop photocatalysts with high production rates and selectivity. Based on the collected results, the immense prospects and opportunities that exist in this technique are also reviewed here. PMID:28772988
Clean Cities: The mission of Clean Cities is to advance the energy, economic, and environmental security of the United States by supporting local actions to reduce petroleum use in the transportation sector. Clean Cities carries out this mission through a network of nearly ... advanced vehicles, fuel blends, fuel economy, hybrid vehicles, and idle reduction. Clean Cities provides
Crawford, D C; Bell, D S; Bamber, J C
1993-01-01
A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
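A variance/mean-driven adaptive filter of the kind described, sketched here in the spirit of the classical Lee filter rather than as the authors' exact algorithm, smooths strongly where local statistics look like pure speckle and weakly where they suggest real structure. The window size, noise variance, and data below are illustrative assumptions:

```python
import numpy as np

def lee_like_filter(img, win=3, noise_var=0.05):
    """Adaptive smoothing driven by local mean/variance statistics:
    output tends to the local mean where the window looks like pure
    speckle, and preserves the pixel where variance suggests structure."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            # Gain k in [0, 1]: 0 = full smoothing, 1 = keep pixel as-is.
            k = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            out[i, j] = mean + k * (img[i, j] - mean)
    return out

rng = np.random.default_rng(1)
flat = 1.0 + rng.normal(0, 0.2, (16, 16))   # speckle-like flat region
filtered = lee_like_filter(flat, noise_var=0.04)
print(filtered.var() < flat.var())  # True
```

Note how the gain depends on a calibrated noise variance, which is exactly why the scanner's nonlinear amplitude compression must be characterized or inverted before filtering, as the abstract describes.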
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Nitrogen reduction pathways in estuarine sediments: Influences of organic carbon and sulfide
NASA Astrophysics Data System (ADS)
Plummer, Patrick; Tobias, Craig; Cady, David
2015-10-01
Potential rates of sediment denitrification, anaerobic ammonium oxidation (anammox), and dissimilatory nitrate reduction to ammonium (DNRA) were mapped across the entire Niantic River Estuary, CT, USA, at 100-200 m scale resolution consisting of 60 stations. On the estuary scale, denitrification accounted for ~ 90% of the nitrogen reduction, followed by DNRA and anammox. However, the relative importance of these reactions to each other was not evenly distributed through the estuary. A Nitrogen Retention Index (NIRI) was calculated from the rate data (DNRA/(denitrification + anammox)) as a metric to assess the relative amounts of reactive nitrogen being recycled versus retained in the sediments following reduction. The distribution of rates and accompanying sediment geochemical analytes suggested variable controls on specific reactions, and on the NIRI, depending on position in the estuary and that these controls were linked to organic carbon abundance, organic carbon source, and pore water sulfide concentration. The relationship between NIRI and organic carbon abundance was dependent on organic carbon source. Sulfide proved the single best predictor of NIRI, accounting for 44% of its observed variance throughout the whole estuary. We suggest that as a single metric, sulfide may have utility as a proxy for gauging the distribution of denitrification, anammox, and DNRA.
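The Nitrogen Retention Index defined above is a simple ratio of potential rates; a minimal sketch with invented rates (not the Niantic River values):

```python
def nitrogen_retention_index(dnra, denitrification, anammox):
    """NIRI = DNRA / (denitrification + anammox): reactive nitrogen
    retained as ammonium relative to nitrogen removed as N2."""
    return dnra / (denitrification + anammox)

# Hypothetical potential rates (arbitrary consistent units).
print(round(nitrogen_retention_index(2.0, 9.0, 1.0), 2))  # 0.2
```

Higher values mean a larger share of reduced nitrogen is recycled within the sediment rather than lost from the system.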
Idle Reduction Equipment Excise Tax Exemption: Qualified on-board idle reduction devices and advanced insulation are exempt from the federal excise tax imposed on the retail sale of heavy-duty highway vehicles. For more information, see the SmartWay Technology Program Federal Excise Tax Exemption website. The exemption applies to equipment that
45 CFR 156.215 - Advance payments of the premium tax credit and cost-sharing reduction standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cost-sharing reduction standards. 156.215 Section 156.215 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification...
45 CFR 156.215 - Advance payments of the premium tax credit and cost-sharing reduction standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... cost-sharing reduction standards. 156.215 Section 156.215 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification...
20 CFR 606.26 - Application for waiver and substitution.
Code of Federal Regulations, 2010 CFR
2010-04-01
... CREDITS UNDER THE FEDERAL UNEMPLOYMENT TAX ACT; ADVANCES UNDER TITLE XII OF THE SOCIAL SECURITY ACT Relief From Tax Credit Reduction § 606.26 Application for waiver and substitution. (a) Application. The... year, will notify the applicant and the Secretary of the Treasury of the resulting tax credit reduction...
Time Series Model Identification and Prediction Variance Horizon.
1980-06-01
stationary time series Y(t). In terms of the correlation function ρ(v), the three time series memory types are defined by the behavior of the sum of |ρ(v)| over v = 1 to infinity: no memory when ρ(v) = 0 for all v ≥ 1, short memory when the sum is finite, and long memory when the sum diverges. Within short memory time series there are three types whose classification in terms of correlation functions is... Parzen, E. (1974) "Some Recent Advances in Time Series Modeling", IEEE Transactions on Automatic Control, Vol. AC-19, No. 6, December, 723-730. Parzen, E. (1976) "An
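The memory classification above can be illustrated numerically; the sketch below (not from the report) estimates the sample autocorrelation of a white-noise series and an AR(1) series and compares the partial sums of |ρ(v)|, which stay near zero for no memory and converge for short memory:

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelation rho_hat(v) for v = 1..max_lag."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.array([np.dot(y[:-v], y[v:]) / denom for v in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
n = 5000

# White noise ("no memory"): rho(v) = 0 for all v >= 1.
white = rng.standard_normal(n)

# AR(1) with phi = 0.6 ("short memory"): the sum of |rho(v)| converges,
# in theory to phi / (1 - phi) = 1.5.
ar1 = np.empty(n)
ar1[0] = 0.0
for t in range(1, n):
    ar1[t] = 0.6 * ar1[t - 1] + rng.standard_normal()

for name, series in [("white noise", white), ("AR(1)", ar1)]:
    partial_sum = np.abs(sample_acf(series, 50)).sum()
    print(f"{name}: sum of |rho_hat(v)| for v <= 50: {partial_sum:.2f}")
```

For a long-memory series the same partial sums would keep growing as the maximum lag increases instead of leveling off.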
Safety evaluation of joint and conventional lane merge configurations for freeway work zones.
Ishak, Sherif; Qi, Yan; Rayaprolu, Pradeep
2012-01-01
Inefficient operation of traffic in work zone areas not only leads to an increase in travel time delays, queue length, and fuel consumption but also increases the number of forced merges and roadway accidents. This study evaluated the safety performance of work zones with a conventional lane merge (CLM) configuration in Louisiana. Analysis of variance (ANOVA) was used to compare the crash rates for accidents involving fatalities, injuries, and property damage only (PDO) in each of the following 4 areas: (1) advance warning area, (2) transition area, (3) work area, and (4) termination area. The analysis showed that the advance warning area had higher fatality, injury, and PDO crash rates when compared to the transition area, work area, and termination area. This finding confirmed the need to make improvements in the advance warning area where merging maneuvers take place. Therefore, a new lane merge configuration, called joint lane merge (JLM), was proposed and its safety performance was examined and compared to the conventional lane merge configuration using a microscopic simulation model (VISSIM), which was calibrated with real-world data from an existing work zone on I-55 and used to simulate a total of 25 different scenarios with different levels of demand and traffic composition. Safety performance was evaluated using 2 surrogate measures: uncomfortable decelerations and speed variance. Statistical analysis was conducted to determine whether the differences in safety performance between both configurations were significant. The safety analysis indicated that JLM outperformed CLM in most cases with low to moderate flow rates and that the percentage of trucks did not have a significant impact on the safety performance of either configuration. 
Though the safety analysis did not clearly indicate which lane merge configuration is safer for the overall work zone area, it was able to identify the possibly associated safety changes within the work zone area under different traffic conditions. Copyright © 2012 Taylor & Francis Group, LLC
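As a hedged illustration of the ANOVA comparison described above (the crash-rate samples below are invented, and the study's actual analysis was more involved), a one-way F statistic across the four work-zone areas can be computed directly:

```python
import numpy as np

# Hypothetical crash-rate samples for the four work-zone areas;
# all numbers are invented for illustration.
areas = {
    "advance warning": [4.1, 3.8, 4.5, 4.2, 3.9],
    "transition":      [2.0, 2.3, 1.8, 2.1, 2.2],
    "work":            [2.5, 2.2, 2.8, 2.4, 2.6],
    "termination":     [1.2, 1.5, 1.1, 1.4, 1.3],
}

def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: between-group over within-group variance."""
    data = [np.asarray(g, dtype=float) for g in groups]
    all_obs = np.concatenate(data)
    grand_mean = all_obs.mean()
    k, n = len(data), len(all_obs)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in data)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(areas.values())
print(f"F = {f_stat:.1f}")  # a large F means area means differ far more than within-area noise
```

A large F relative to the F distribution's critical value is what supports the finding that the advance warning area stands out from the other three.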
Patterns and Predictors of Growth in Divorced Fathers' Health Status and Substance Use
DeGarmo, David S.; Reid, John B.; Leve, Leslie D.; Chamberlain, Patricia; Knutson, John F.
2009-01-01
Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed. PMID:19477763
An Overview of Low-Emission Combustion Research
NASA Technical Reports Server (NTRS)
DelRosario, Ruben
2014-01-01
An overview of research efforts at NASA Glenn Research Center (GRC) in low-emission combustion technology that have made a significant impact on the Nitrogen Oxides (NOx) emission reduction in aircraft propulsion will be presented. The technology advancements and their impact on aircraft emissions will be discussed in the context of NASA's Aeronautics Research Mission Directorate (ARMD) high-level goals in fuel burn, noise and emission reductions. The highlights of the research presented will show how the past and current efforts have laid the foundation for the engines that are flying today as well as how the continued technology advancements will significantly influence the next generation of aviation propulsion system designs.
An Overview of Low-Emission Combustion Research at NASA Glenn
NASA Technical Reports Server (NTRS)
Reddy, Dhanireddy R.; Lee, Chi-Ming
2016-01-01
An overview of research efforts at NASA Glenn Research Center (GRC) in low-emission combustion technology that have made a significant impact on the nitrogen oxides (NOx) emission reduction in aircraft propulsion is presented. The technology advancements and their impact on aircraft emissions are discussed in the context of NASA's Aeronautics Research Mission Directorate (ARMD) high-level goals in fuel burn, noise and emission reductions. The highlights of the research presented here show how the past and current efforts laid the foundation for the engines that are flying today as well as how the continued technology advancements will significantly influence the next generation of aviation propulsion system designs.
Overview of Low Emission Combustion Research At NASA Glenn
NASA Technical Reports Server (NTRS)
Reddy, D. R.
2016-01-01
An overview of research efforts at NASA Glenn Research Center (GRC) in low-emission combustion technology that have made a significant impact on the nitrogen oxides (NOx) emission reduction in aircraft propulsion is presented. The technology advancements and their impact on aircraft emissions are discussed in the context of NASA's Aeronautics Research Mission Directorate (ARMD) high-level goals in fuel burn, noise and emission reductions. The highlights of the research presented here show how the past and current efforts laid the foundation for the engines that are flying today as well as how the continued technology advancements will significantly influence the next generation of aviation propulsion system designs.
Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Crocker, Andrew M.; Greene, William D.
2017-01-01
The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. During the ABEDRR effort, the Dynetics Team has modified flight-proven Apollo-Saturn F-1 engine components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically-produced engine that could potentially both replace the RD-180 on Atlas V and satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. Engine physical dimensions and performance parameters resulting from this study provide the system level requirements for the ORSC risk reduction test article. 
The test article is scheduled to complete fabrication and assembly soon and continue testing through late 2019. Dynetics has also designed, developed, and built innovative tank and structure assemblies using friction stir welding to leverage recent NASA investments in manufacturing tools, facilities, and processes, significantly reducing development and recurring costs. The full-scale cryotank assembly was used to verify the structural design and prove affordable processes. Dynetics performed hydrostatic and cryothermal proof tests on the assembly to verify that the assembly meets performance requirements.
NASA Technical Reports Server (NTRS)
Keiter, I. D.
1982-01-01
Studies of several General Aviation aircraft indicated that the application of advanced technologies to General Aviation propellers can reduce fuel consumption in future aircraft by a significant amount. Propeller blade weight reductions achieved through the use of composites, propeller efficiency and noise improvements achieved through the use of advanced concepts and improved propeller analytical design methods result in aircraft with lower operating cost, acquisition cost and gross weight.
Sometimes There Is No Most-Vital Arc: Assessing and Improving the Operational Resilience of Systems
2013-01-01
this problem in which the reduction in capacity on each interdicted arc is a random variable with known mean and variance, and the overall goal is...capacities will allow. A classic result in the theory of network flows states that the maximum flow volume is equal to the minimum capacity of any cut, where...attack capability provides a natural means to assess the resilience of the system as a whole. This analysis yields a corollary result that common-sense
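The max-flow/min-cut result cited above can be verified on a toy network; the sketch below uses the standard Edmonds-Karp algorithm (the network and its capacities are invented, not from the report):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow  # no augmenting path left: current flow is maximal
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy network: node 0 = source, node 3 = sink. The cut {0,1,2} | {3}
# has capacity 3 + 2 = 5, which equals the maximum flow, illustrating
# the max-flow min-cut theorem.
cap = [
    [0, 4, 2, 0],
    [0, 0, 3, 3],
    [0, 0, 0, 2],
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))
```

Interdiction models build on exactly this duality: reducing the capacity of arcs in a minimum cut is what reduces the maximum deliverable flow.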
A preliminary study of the benefits of flying by ground speed during final approach
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.
1978-01-01
A study was conducted to evaluate the benefits of an approach technique which utilized constant ground speed on approach. It was determined that the technique reduced the capacity losses in headwinds experienced with the currently used constant airspeed technique. The benefits of the technique were found to increase as headwinds increased and as the wake avoidance separation intervals were reduced. An additional benefit noted for the constant ground speed technique was a reduction in stopping distance variance due to the approach wind environment.
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where Vj is the jth random number in the sequence of a stream of uniform random numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. ... The quantity X(b,c,d) is expressed in terms of the probability P(b,c,d). ... We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
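The inverse transform step described above can be sketched generically (this is an illustration of the textbook method, not the thesis's code): for U uniform on (0, 1), X = -ln(1 - U) is exponential with unit mean.

```python
import math
import random

def unit_mean_exponential(u):
    """Inverse transform: map a uniform(0,1) draw to a unit-mean exponential.

    The unit-mean exponential CDF is F(x) = 1 - exp(-x); solving F(x) = u
    for x gives x = -ln(1 - u).
    """
    return -math.log(1.0 - u)

rng = random.Random(42)
draws = [unit_mean_exponential(rng.random()) for _ in range(100_000)]
print(f"sample mean: {sum(draws) / len(draws):.3f}")  # should be close to 1.0
```

The same inversion idea applies to any distribution with an invertible CDF, which is why it is the workhorse for driving event times in discrete-event simulations.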
Control of large flexible structures - An experiment on the NASA Mini-Mast facility
NASA Technical Reports Server (NTRS)
Hsieh, Chen; Kim, Jae H.; Liu, Ketao; Zhu, Guoming; Skelton, Robert E.
1991-01-01
The output variance constraint controller design procedure is integrated with model reduction by modal cost analysis. A procedure is given for tuning MIMO controller designs to find the maximal rms performance of the actual system. Controller designs based on a finite-element model of the system are compared with controller designs based on an identified model (obtained using the Q-Markov Cover algorithm). The identified model and the finite-element model led to similar closed-loop performance, when tested in the Mini-Mast facility at NASA Langley.
Does asymmetric correlation affect portfolio optimization?
NASA Astrophysics Data System (ADS)
Fryd, Lukas
2017-07-01
The classical portfolio optimization problem does not assume asymmetric behavior of the relationship among asset returns. An asymmetric response of correlation to bad news could, however, be important information in portfolio optimization. The paper applies the Dynamic conditional correlation model (DCC) and its asymmetric version (ADCC) to model asymmetric behavior of conditional correlation. We analyse asymmetric correlation among the S&P index, a bond index, and the spot gold price before the mortgage crisis in 2008. We evaluate the forecast ability of the models during and after the mortgage crisis and demonstrate the impact of asymmetric correlation on the reduction of portfolio variance.
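A minimal sketch of why asymmetric correlation matters for portfolio variance (the weights, volatilities, and correlation matrices below are invented, and no DCC/ADCC estimation is performed): if correlations rise on bad news, the diversification benefit shrinks exactly when it is needed most.

```python
import numpy as np

def portfolio_variance(weights, vols, corr):
    """w' * Sigma * w, with Sigma built from volatilities and a correlation matrix."""
    weights = np.asarray(weights)
    cov = np.outer(vols, vols) * corr
    return float(weights @ cov @ weights)

w = np.array([0.5, 0.3, 0.2])          # stocks, bonds, gold (illustrative weights)
vols = np.array([0.20, 0.07, 0.15])    # annualised volatilities (invented)

corr_calm = np.array([[1.0, -0.2, 0.1],
                      [-0.2, 1.0, 0.0],
                      [0.1,  0.0, 1.0]])
# In a stress scenario, correlations move up: less diversification benefit.
corr_crisis = np.array([[1.0, 0.3, 0.5],
                        [0.3, 1.0, 0.2],
                        [0.5, 0.2, 1.0]])

for label, corr in [("calm", corr_calm), ("crisis", corr_crisis)]:
    var = portfolio_variance(w, vols, corr)
    print(f"{label}: variance {var:.5f}, volatility {np.sqrt(var):.3f}")
```

A conditional-correlation model such as DCC/ADCC effectively supplies a time-varying `corr` matrix, so the portfolio variance forecast moves between regimes like these.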
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
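The core regression calculation such a package performs can be sketched in a few lines of double-precision arithmetic (this is a generic least-squares illustration, not NEWRAP itself; the data and coefficients are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2 + 3*x1 - 1.5*x2 + noise (coefficients invented).
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 5, n)
y = 2.0 + 3.0 * x1 - 1.5 * x2 + rng.normal(0, 0.5, n)

# Design matrix with an intercept column; all arithmetic is in double
# precision, as the abstract notes for NEWRAP.
X = np.column_stack([np.ones(n), x1, x2])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - X @ coef
ss_res = float(residuals @ residuals)
ss_tot = float(((y - y.mean()) ** 2).sum())
r_squared = 1.0 - ss_res / ss_tot

print("coefficients:", np.round(coef, 2))
print(f"R^2 = {r_squared:.4f}")
```

The residual vector computed here is what residual-versus-variable plots are drawn from, and the sums of squares are the ingredients of the analysis-of-variance table the abstract mentions.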
Infrared Measurement Variability Analysis.
1980-09-01
collecting optics of the measurement system. The first equation for the blackbody experiment expresses the 3.5-4.0 μm band irradiance as an integral over wavelength of the blackbody function W(λ,T) and the system transmittance τ(λ,D). ... potential for noise reduction by identifying and reducing contributing system effects. The measurement variance of an infinite population of possible ... irradiance can be written as the corresponding band integral of W(λ, T + ΔT). Using the two expressions just developed
Generalized Reich-Moore R-matrix approximation
NASA Astrophysics Data System (ADS)
Arbanas, Goran; Sobes, Vladimir; Holcomb, Andrew; Ducru, Pablo; Pigni, Marco; Wiarda, Dorothea
2017-09-01
A conventional Reich-Moore approximation (RMA) of R-matrix is generalized into a manifestly unitary form by introducing a set of resonant capture channels treated explicitly in a generalized, reduced R-matrix. A dramatic reduction of channel space witnessed in conventional RMA, from the Nc × Nc full R-matrix to the Np × Np reduced R-matrix, where Nc = Np + Nγ, with Np and Nγ denoting the number of particle and γ-ray channels, respectively, is due to Np < Nγ. The corresponding reduction of channel space in generalized RMA (GRMA) is from the Nc × Nc full R-matrix to Ñ × Ñ, where Ñ = Np + Ñγ, and where Ñγ is the number of capture channels defined in GRMA. We show that Ñγ = Nλ, where Nλ is the number of R-matrix levels. This reduction in channel space, although not as dramatic as in the conventional RMA, could be significant for medium and heavy nuclides where Ñγ < Nγ. The resonant capture channels defined by GRMA accommodate level-level interference (via capture channels) neglected in conventional RMA. The expression for total capture cross section in GRMA is formally equal to that of the full Nc × Nc R-matrix. This suggests that GRMA could yield improved nuclear data evaluations in the resolved resonance range at a cost of introducing Ñγ(Ñγ - 1)/2 resonant capture width parameters relative to conventional RMA. Manifest unitarity of GRMA justifies a method advocated by Fröhner and implemented in the SAMMY nuclear data evaluation code for enforcing unitarity of conventional RMA. Capture widths of GRMA are exactly convertible into alternative R-matrix parameters via the Brune transform. Application of idealized statistical methods to GRMA shows that the variance among conventional RMA capture widths in extant RMA evaluations could be used to estimate the variance among the off-diagonal elements neglected by conventional RMA. Significant departure of capture widths from an idealized distribution may indicate the presence of underlying doorway states.
Patil, Pravinkumar G; Hazarey, Vinay; Chaudhari, Rekha; Nimbalkar-Patil, Smita
2016-12-01
To evaluate the effect of an ice-cream stick exercise regimen, with or without a mouth-exercising device (MED), on mucosal burning sensation in oral submucous fibrosis. In total, 282 patients with oral submucous fibrosis were treated with topical corticosteroid, an oral antioxidant, and the ice-cream stick exercise regimen. Patients in subgroups A1, A2, and A3 were additionally given a new MED. Patients in subgroups A1 and B1 with interincisal distance (IID) of 20 to 35 mm were managed without any additional therapy; patients in subgroups A2 and B2 with IID of 20 to 35 mm were additionally managed with intralesional injections; and those in subgroups A3 and B3 with IID less than 20 mm were managed surgically. Subjective evaluation of the decrease in oral mucosal burning was measured on a visual analogue scale (VAS). Analysis of variance and Tukey's multiple post hoc analysis were carried out to present the results. Patients using the MED, that is, subgroups A1, A2, and A3, showed reductions in burning sensation in the range of 64.8% to 71.1% and 27.8% to 30.9%, whereas in subgroups B1, B2, and B3, reduction in burning sensation ranged from 64.7% to 69.9% and from 29.3% to 38.6% after 6 months. Two-way analysis of variance indicated statistically significant differences in the change from initial VAS scores to 6-month VAS scores between MED users and non-MED users. The MED helps to enhance the rate of reduction of mucosal burning sensation, in addition to the conventional ice-cream stick regimen, as an adjunct to local and surgical treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle
NASA Astrophysics Data System (ADS)
Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.
2017-12-01
Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions are different. The latter can be traced to its source components by decomposing model predictions to a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. 
We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
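A toy version of the matrix reorganization described above, for an invented three-pool model with carbon balance dX/dt = B·u + A·K·X, where u is carbon input, B allocates it among pools, K holds turnover rates, and A encodes inter-pool transfers (all parameter values are invented; real models such as CLM or CABLE have many more pools). The steady state and residence times then follow directly from the matrix form:

```python
import numpy as np

# Toy three-pool carbon model (leaf, litter, soil); all parameters invented.
u = 2.0                          # carbon input, kg C m^-2 yr^-1
B = np.array([1.0, 0.0, 0.0])    # all input enters the leaf pool
K = np.diag([1.0, 0.5, 0.02])    # turnover rates, yr^-1
# A: -1 on the diagonal (loss from each pool), transfer fractions below it.
A = np.array([[-1.0, 0.0,  0.0],
              [0.6, -1.0,  0.0],
              [0.0,  0.3, -1.0]])

# Steady state of dX/dt = B*u + A*K*X  =>  X = -(A*K)^-1 * (B*u)
x_ss = -np.linalg.solve(A @ K, B * u)
residence_times = x_ss / u       # per-pool residence time diagnostic
print("steady-state pools (kg C m^-2):", np.round(x_ss, 2))
print("residence times (yr):", np.round(residence_times, 2))
```

This is the sense in which the matrix form makes traceability analysis cheap: carbon storage decomposes exactly into input times residence time, and differences between models can be attributed to one factor or the other.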
Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2015-01-25
In this work, Raman hyperspectral images and multivariate curve resolution-alternating least squares (MCR-ALS) are used to study the distribution of actives and excipients within a pharmaceutical drug product. This article is mainly focused on the distribution of a low dose constituent. Different approaches are compared, using initially filtered or non-filtered data, or using a column-wise augmented dataset before starting the MCR-ALS iterative process including appended information on the low dose component. In the studied formulation, magnesium stearate is used as a lubricant to improve powder flowability. With a theoretical concentration of 0.5% (w/w) in the drug product, the spectral variance contained in the data is weak. By using a principal component analysis (PCA) filtered dataset as a first step of the MCR-ALS approach, the lubricant information is lost in the non-explained variance and its associated distribution in the tablet cannot be highlighted. A sufficient number of components to generate the PCA noise-filtered matrix has to be used in order to keep the lubricant variability within the data set analyzed or, otherwise, work with the raw non-filtered data. Different models are built using an increasing number of components to perform the PCA reduction. It is shown that the magnesium stearate information can be extracted from a PCA model using a minimum of 20 components. In the last part, a column-wise augmented matrix, including a reference spectrum of the lubricant, is used before starting the MCR-ALS process. PCA reduction is performed on the augmented matrix, so the magnesium stearate contribution is included within the MCR-ALS calculations. By using an appropriate PCA reduction, with a sufficient number of components, or by using an augmented dataset including appended information on the low dose component, the distribution of the two actives, the two main excipients and the low dose lubricant are correctly recovered. Copyright © 2014 Elsevier B.V. All rights reserved.
Experimental clean combustor program, phase 1
NASA Technical Reports Server (NTRS)
Bahr, D. W.; Gleason, C. C.
1975-01-01
Full annular versions of advanced combustor designs, sized to fit within the CF6-50 engine, were defined, manufactured, and tested at high pressure conditions. Configurations were screened, and significant reductions in CO, HC, and NOx emissions levels were achieved with two of these advanced combustor design concepts. Emissions and performance data at a typical AST cruise condition were also obtained, along with combustor noise data, as part of an addendum to the basic program. The two promising combustor design approaches evolved in these efforts were the Double Annular Combustor and the Radial/Axial Combustor. With versions of these two basic combustor designs, CO and HC emissions levels at or near the target levels were obtained. Although the low target NOx emissions level was not obtained with these two advanced combustor designs, significant reductions were achieved relative to the NOx levels of current technology combustors. Smoke emission levels below the target value were obtained.
Do players and staff sleep more during the pre- or competitive season of elite rugby league?
Caia, Johnpaul; Scott, Tannath J; Halson, Shona L; Kelly, Vincent G
2017-09-01
This study establishes the sleep behaviour of players and staff during the pre- and competitive seasons of elite rugby league. For seven days during both the pre- and competitive seasons, seven rugby league players and nine full-time staff from one professional Australian rugby league club had their sleep monitored via wrist actigraphy and self-report sleep diaries. Two-way repeated measures analysis of variance determined differences between the pre- and competitive season in players and staff, with effect sizes (ES) used to interpret the practical magnitude of differences. Findings show an earlier bed time and wake time for players (-34 min, ES = 1.5; ±0.5 and -39 min, ES = 2.1; ±0.5 respectively) and staff (-29 min, ES = 0.8; ±0.3 and -35 min, ES = 1.7; ±0.4 respectively) during pre-season when compared to the competitive season. Despite this, no differences were seen in the amount of time in bed, sleep duration or sleep efficiency obtained between the pre- and competitive seasons. Our results suggest that early morning training sessions scheduled during pre-season advance wake time in elite rugby league. However, both players and staff can avoid reductions in sleep duration and sleep efficiency by subsequently adjusting their night-time sleep patterns. This may be particularly pertinent for staff, who wake earlier than players during both the pre- and competitive seasons.
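The effect sizes (ES) quoted above are standardized mean differences; a generic sketch is below (the bed-time data are invented, and the study's exact estimator may differ, for example a paired version):

```python
import statistics

def cohens_d(sample_a, sample_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / pooled_sd

# Invented bed times, in minutes after 21:00, for seven players in the
# pre-season versus the competitive season (illustrative only).
pre = [60, 75, 50, 65, 70, 55, 80]
comp = [95, 110, 85, 100, 105, 90, 115]
print(f"d = {cohens_d(comp, pre):.2f}")  # positive: later bed times in-season
```

An ES near 1.5-2.0, as reported for the players' bed and wake times, corresponds to group means separated by well over one standard deviation.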
Metabolic alterations in patients with Parkinson disease and visual hallucinations.
Boecker, Henning; Ceballos-Baumann, Andres O; Volk, Dominik; Conrad, Bastian; Forstl, Hans; Haussermann, Peter
2007-07-01
Visual hallucinations (VHs) occur frequently in advanced stages of Parkinson disease (PD). Which brain regions are affected in PD with VH is not well understood. Objective: To characterize the pattern of affected brain regions in PD with VH and to determine whether functional changes in PD with VH occur preferentially in visual association areas, as is suggested by the complex clinical symptomatology. Design: Positron emission tomography measurements using fluorodeoxyglucose F 18, with between-group statistical analysis accounting for the variance related to disease stage. Setting: University hospital. Patients: Eight patients with PD and VH and 11 patients with PD without VH were analyzed. The presence of VH during the month before positron emission tomography was rated using the Neuropsychiatric Inventory subscale for VH (PD and VH, 4.63; PD without VH, 0.00; P < .002). Results: Parkinson disease with VH, compared with PD without VH, was characterized by reduction in the regional cerebral metabolic rate for glucose consumption (P < .05, corrected for false discovery rate) in occipitotemporoparietal regions, sparing the occipital pole. No significant increase in regional glucose metabolism was detected in patients with PD and VH. Conclusions: The pattern of resting-state metabolic changes in regions of the dorsal and ventral visual streams, but not in primary visual cortex, in patients with PD and VH is compatible with the functional roles of visual association areas in higher-order visual processing. These findings may help to further elucidate the functional mechanisms underlying VH in PD.
Distal radius osteotomy with volar locking plates based on computer simulation.
Miyake, Junichi; Murase, Tsuyoshi; Moritomo, Hisao; Sugamoto, Kazuomi; Yoshikawa, Hideki
2011-06-01
Corrective osteotomy using dorsal plates and structural bone graft has typically been used for treating symptomatic distal radius malunions. However, the procedure is technically demanding and requires an extensive dorsal approach. Residual deformity is a relatively frequent complication of this technique. We evaluated the clinical applicability of a three-dimensional osteotomy using computer-aided design and manufacturing techniques with volar locking plates for distal radius malunions. Ten patients with metaphyseal radius malunions were treated. Corrective osteotomy was simulated with the help of three-dimensional bone surface models created using CT data. We simulated the most appropriate screw holes in the deformed radius using computer-aided design data of a locking plate. During surgery, using a custom-made surgical template, we predrilled the screw holes as simulated. After osteotomy, plate fixation using the predrilled screw holes enabled automatic reduction of the distal radial fragment. Autogenous iliac cancellous bone was grafted after plate fixation. The median volar tilt, radial inclination, and ulnar variance improved from -20°, 13°, and 6 mm, respectively, before surgery to 12°, 24°, and 1 mm, respectively, after surgery. The median wrist flexion improved from 33° before surgery to 60° after surgery. The median wrist extension was 70° before surgery and 65° after surgery. All patients experienced wrist pain before surgery, which disappeared or decreased after surgery. Surgeons can operate precisely and easily using this advanced technique. It is a new treatment option for malunion of distal radius fractures.
NASA Technical Reports Server (NTRS)
Maddalon, D. V.
1974-01-01
Questions concerning the energy efficiency of aircraft compared to ground transport are considered, taking energy intensity to be the energy consumed per passenger statute mile. It is found that today's transport aircraft have an energy intensity potential comparable to that of ground modes. Possibilities for improving energy intensity are also much better for aircraft than for ground transportation. Approaches for potential reductions in aircraft energy consumption are examined, giving attention to steps for increasing the efficiency of present aircraft and to reductions in energy intensity obtainable by the introduction of new aircraft utilizing advanced technology. The use of supercritical aerodynamics is discussed along with the employment of composite structures, advances in propulsion systems, and the introduction of very large aircraft. Other improvements in fuel economy can be obtained by a reduction of skin-friction drag and the use of hydrogen fuel.
The membrane biofilm reactor: the natural partnership of membranes and biofilm.
Rittmann, B E
2006-01-01
Many exciting new technologies for water-quality control combine microbiological processes with adsorption, advanced oxidation, a membrane or an electrode to improve performance, address emerging contaminants or capture renewable energy. An excellent example is the H2-based membrane biofilm reactor (MBfR), which delivers H2 gas to a biofilm that naturally accumulates on the outer surface of a bubbleless membrane. Autotrophic bacteria in the biofilm oxidise the H2 and use the electrons to reduce NO3-, ClO4- and other oxidised contaminants. This natural partnership of membranes and biofilm makes it possible to gain many cost, performance and simplicity advantages from using H2 as the electron donor for microbially catalysed reductions. The MBfR has been demonstrated for denitrification in drinking water; reduction of perchlorate in groundwater; reduction of selenate, chromate, trichloroethene and other emerging contaminants; advanced N removal in wastewater treatment and autotrophic total-N removal.
Applications of GARCH models to energy commodities
NASA Astrophysics Data System (ADS)
Humphreys, H. Brett
This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. 
The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
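The seasonal-variance idea in this thesis can be illustrated with a minimal simulation. This is a hedged sketch rather than the author's estimated model: a GARCH(1,1) process whose constant term varies over an annual cycle, with all parameter values chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (not the thesis's fitted model): a GARCH(1,1) process
# whose constant term omega follows an annual seasonal cycle, as hypothesized
# for natural gas, heating oil, and soybean price variance.
rng = np.random.default_rng(0)

n_days = 2000
omega_base, omega_seasonal = 0.05, 0.03   # assumed parameters
alpha, beta = 0.08, 0.90                  # assumed ARCH/GARCH persistence terms

h = np.empty(n_days)   # conditional variance
r = np.empty(n_days)   # returns
h[0] = omega_base / (1 - alpha - beta)    # unconditional variance as a start
r[0] = np.sqrt(h[0]) * rng.standard_normal()

for t in range(1, n_days):
    # Seasonal constant: one full cycle per 252 trading days
    omega_t = omega_base + omega_seasonal * np.cos(2 * np.pi * t / 252)
    h[t] = omega_t + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()
```

Estimating such a model on real price data would replace the assumed parameters with maximum-likelihood fits, in the spirit of the modified univariate GARCH specifications the thesis describes.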
Manns, Braden; McKenzie, Susan Q.; Au, Flora; Gignac, Pamela M.; Geller, Lawrence Ian
2017-01-01
Background: Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. Objective: The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. Design: This study used an analytical model combining Canadian data from various sources. Setting and Patients: This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/1.73 m² and those on dialysis. Measurements: We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. Results: We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay or reduction in severity, or increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments by Can$13.8 million per year within 5 years.
Limitations: This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments for programs that provide income for individuals without private insurance and who do not qualify for Canada Pension Plan disability payments. Conclusions: Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients, and their families, but also to these other important stakeholders. Improvements in care of individuals with kidney disease could reduce these costs. PMID:28491340
Manns, Braden; McKenzie, Susan Q; Au, Flora; Gignac, Pamela M; Geller, Lawrence Ian
2017-01-01
Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. This study used an analytical model combining Canadian data from various sources. This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/1.73 m² and those on dialysis. We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay or reduction in severity, or increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments by Can$13.8 million per year within 5 years.
This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments for programs that provide income for individuals without private insurance and who do not qualify for Canada Pension Plan disability payments. Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients, and their families, but also to these other important stakeholders. Improvements in care of individuals with kidney disease could reduce these costs.
Implementation of a pharmacist career ladder program.
Heavner, Mojdeh S; Tichy, Eric M; Yazdi, Marina
2016-10-01
The implementation and outcomes of a pharmacist career ladder program (PCLP) at a tertiary care, academic medical center are described. A PCLP was developed at Yale-New Haven Hospital to guide career development, motivate staff to perform beyond their daily tasks and responsibilities, and recognize and retain high performers through professional advancement. The PCLP advancement criteria include specific requirements for excellence in five categories: level of training and experience, pharmacy practice, drug information, education and scholarship, and leadership. The PCLP is designed with four distinct tiers: clinical pharmacist, clinical pharmacist II, clinical pharmacy specialist, and clinical pharmacy specialist II. The specific criteria are increasingly challenging to achieve when moving up the ladder. Pharmacists may apply voluntarily each year for advancement. A PCLP review committee consisting of pharmacist peers and managers meets annually to discuss and vote on career advancement decisions. Since PCLP implementation, we have observed an increasing success rate for advancement (50% in 2013, 85% in 2014, and 100% in 2015) and a considerable increase in pharmacist participation in clinical and process improvement projects, as well as intervention and medication-use variance documentation. The implementation of a PCLP at a tertiary care, academic medical center provided an opportunity for frontline pharmacists to advance professionally and increased their participation and leadership in clinical and process improvement projects and drug-use policy and medication safety initiatives; the program also increased the number of pharmacists with specialty board certification and peer-reviewed publications. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Mystakidou, Kyriaki; Tsilika, Eleni; Parpa, Efi; Athanasouli, Paraskevi; Galanos, Antonis; Anna, Pagoropoulou; Vlahos, Lambros
2009-04-01
Growing interest in the psychological distress of patients with cancer was the major reason for conducting this study. The aims were to assess the relationships among hopelessness, anxiety, distress, and preparatory grief, as well as their power to predict hopelessness. Ninety-four patients with advanced cancer completed the study at a palliative care unit in Athens, Greece. The Beck Hopelessness Scale, the Greek version of the Hospital Anxiety and Depression (HAD) scale, and the Preparatory Grief in Advanced Cancer Patients scale were administered. Information concerning patients' treatment was acquired from the medical records, whereas physicians recorded their clinical condition. Hopelessness correlated significantly with preparatory grief (r = .630, P < .0005), anxiety (r = .539, P < .0005), depression (r = .642, P < .0005), HAD-Total (r = .686, P < .0005), and age (r = -.212, P = .040). Multiple regression analyses showed that preparatory grief (P < .0005), depression (P < .0005), and age (P = .003) were predictors of hopelessness, explaining 58.8% of the total variance. In this patient sample, depression, preparatory grief, and patients' age were predictors of hopelessness.
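The "58.8% of total variance" figure reported above is the R² of a multiple regression. A minimal sketch of that computation follows, using synthetic stand-in data rather than the study sample; every value below is an assumption for illustration.

```python
import numpy as np

# Hypothetical sketch: regress hopelessness on preparatory grief, depression,
# and age, then compute the proportion of variance explained (R^2).
# Data are synthetic stand-ins, not the study's patients.
rng = np.random.default_rng(4)

n = 94  # matching the reported sample size
grief = rng.normal(30, 8, n)
depression = rng.normal(9, 4, n)
age = rng.normal(62, 11, n)
hopeless = 2 + 0.15 * grief + 0.4 * depression - 0.05 * age + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column
x = np.column_stack([np.ones(n), grief, depression, age])
coef, *_ = np.linalg.lstsq(x, hopeless, rcond=None)
resid = hopeless - x @ coef
r_squared = 1 - resid.var() / hopeless.var()
```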
Use of vegetation health data for estimation of aus rice yield in bangladesh.
Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y; Nizamuddin, Mohammad; Goldberg, Mitch
2009-01-01
Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991-2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March-April (weeks 8-13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost.
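Principal component regression as described above regresses yield on the leading components of the weekly VHI predictors and then measures the reduction in prediction error variance against a mean-yield baseline. A minimal numpy sketch under assumed, simulated data (the series below are stand-ins, not the AVHRR record):

```python
import numpy as np

# Hypothetical sketch of principal component regression (PCR): regress a
# yield series on the leading principal components of weekly VHI values
# (weeks 8-13), mirroring the modeling approach described above.
rng = np.random.default_rng(1)

# Simulated stand-ins for 15 years of data (1991-2005)
vhi = rng.normal(50, 10, size=(15, 6))            # weekly VHI, weeks 8-13
yield_t = 1.2 + 0.01 * vhi.mean(axis=1) + rng.normal(0, 0.05, 15)

# Principal components of the standardized predictors (via SVD)
x = (vhi - vhi.mean(axis=0)) / vhi.std(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)
scores = x @ vt[:2].T                              # keep 2 leading components

# Ordinary least squares on the component scores
a = np.column_stack([np.ones(15), scores])
coef, *_ = np.linalg.lstsq(a, yield_t, rcond=None)
pred = a @ coef

# Error-variance reduction relative to predicting the mean yield each year
ss_res = np.sum((yield_t - pred) ** 2)
ss_tot = np.sum((yield_t - yield_t.mean()) ** 2)
reduction = 1 - ss_res / ss_tot
```

The paper's stepwise variant additionally selects which components enter the model; this sketch simply keeps the two leading ones.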
Use of Vegetation Health Data for Estimation of Aus Rice Yield in Bangladesh
Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y.; Nizamuddin, Mohammad; Goldberg, Mitch
2009-01-01
Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991–2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March–April (weeks 8–13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost. PMID:22574057
Birnbaum, Jeanette; Gadi, Vijayakrishna K; Markowitz, Elan; Etzioni, Ruth
2016-02-16
Mammography trials, which are the primary sources of evidence for screening benefit, were conducted decades ago. Whether advances in systemic therapies have rendered previously observed benefits of screening less significant is unknown. To compare the outcomes of breast cancer screening trials had they been conducted using contemporary systemic treatments with outcomes of trials conducted with previously used treatments. Computer simulation model of 3 virtual screening trials with similar reductions in advanced-stage cancer cases but reflecting treatment patterns in 1975 (prechemotherapy era), 1999, or 2015 (treatment according to receptor status). Meta-analyses of screening and treatment trials; study of dissemination of primary systemic treatments; SEER (Surveillance, Epidemiology, and End Results) registry. U.S. women aged 50 to 74 years. Time horizons of 10 and 25 years were evaluated from a population perspective. Mammography, chemotherapy, tamoxifen, aromatase inhibitors, and trastuzumab. Breast cancer mortality rate ratio (MRR) and absolute risk reduction (ARR) obtained by the difference in cumulative breast cancer mortality between control and screening groups. At 10 years, screening in a 1975 trial yielded an MRR of 90% and an ARR of 5 deaths per 10,000 women. A 2015 screening trial yielded a 10-year MRR of 90% and an ARR of 3 deaths per 10,000 women. Greater reductions in advanced-stage disease yielded a greater screening effect, but MRRs remained similar across trials. However, ARRs were consistently lower under contemporary treatments. When contemporary treatments were available only for early-stage cases, the MRR was 88%. Disease models simplify reality and cannot capture all breast cancer subtypes. Advances in systemic therapies for breast cancer have not substantively reduced the relative benefits of screening but have likely reduced the absolute benefits because of their positive effect on breast cancer survival. University of Washington and National Cancer Institute.
ERIC Educational Resources Information Center
Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab
2012-01-01
An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…
Yazdanbakhsh, Ahmad Reza; Mohammadi, Amir Sheikh; Alinejad, Abdol Azim; Hassani, Ghasem; Golmohammadi, Sohrab; Mohseni, Seyed Mohsen; Sardar, Mahdieh; Sarsangi, Vali
2016-11-01
The present study evaluates the reduction of antibiotic COD from wastewater by combined coagulation and advanced oxidation processes (AOPs). The reduction of Azithromycin COD by combined coagulation and Fenton-like processes reached a maximum of 96.9% at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.36 mM and 0.38 mM, respectively. Also, 97.9% Clarithromycin COD reduction was achieved at a reaction time of 30 min, a ferric chloride dosage of 120 mg/L, and Fe0 and H2O2 dosages of 0.3 mM and 0.3 mM, respectively. The results of the kinetic studies were best fitted by the pseudo-first-order equation. The results showed a higher rate constant value for the combined coagulation and Fenton-like processes [(kap = 0.022 min-1 and a half-life of 31.5 min for Azithromycin) and (kap = 0.023 min-1 and a half-life of 30.1 min for Clarithromycin)].
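The reported half-lives follow directly from the pseudo-first-order rate constants via t1/2 = ln(2)/k, which can be checked with a couple of lines:

```python
import math

# Half-life of a first-order process: t_1/2 = ln(2) / k
k_azithromycin = 0.022    # min^-1, rate constant reported above
k_clarithromycin = 0.023  # min^-1

t_half_azi = math.log(2) / k_azithromycin
t_half_cla = math.log(2) / k_clarithromycin
# t_half_azi ≈ 31.5 min and t_half_cla ≈ 30.1 min, matching the reported values
```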
Lobelo, Felipe; Kelli, Heval M.; Tejedor, Sheri Chernetsky; Pratt, Michael; McConnell, Michael V.; Martin, Seth S.; Welk, Gregory J.
2017-01-01
Physical activity (PA) interventions constitute a critical component of cardiovascular disease (CVD) risk reduction programs. Objective mobile health (mHealth) software applications (apps) and wearable activity monitors (WAMs) can advance both assessment and integration of PA counseling in clinical settings and support community-based PA interventions. The use of mHealth technology for CVD risk reduction is promising, but integration into routine clinical care and population health management has proven challenging. The increasing diversity of available technologies and the lack of a comprehensive guiding framework are key barriers for standardizing data collection and integration. This paper reviews the validity, utility and feasibility of implementing mHealth technology in clinical settings and proposes an organizational framework to support PA assessment, counseling and referrals to community resources for CVD risk reduction interventions. This integration framework can be adapted to different clinical population needs. It should also be refined as technologies and regulations advance under an evolving health care system landscape in the United States and globally. PMID:26923067
Lobelo, Felipe; Kelli, Heval M; Tejedor, Sheri Chernetsky; Pratt, Michael; McConnell, Michael V; Martin, Seth S; Welk, Gregory J
2016-01-01
Physical activity (PA) interventions constitute a critical component of cardiovascular disease (CVD) risk reduction programs. Objective mobile health (mHealth) software applications (apps) and wearable activity monitors (WAMs) can advance both assessment and integration of PA counseling in clinical settings and support community-based PA interventions. The use of mHealth technology for CVD risk reduction is promising, but integration into routine clinical care and population health management has proven challenging. The increasing diversity of available technologies and the lack of a comprehensive guiding framework are key barriers for standardizing data collection and integration. This paper reviews the validity, utility and feasibility of implementing mHealth technology in clinical settings and proposes an organizational framework to support PA assessment, counseling and referrals to community resources for CVD risk reduction interventions. This integration framework can be adapted to different clinical population needs. It should also be refined as technologies and regulations advance under an evolving health care system landscape in the United States and globally. Copyright © 2016 Elsevier Inc. All rights reserved.
van Oorschot, B; Simon, A
2006-11-01
To analyse and compare surveys of German doctors and judges on end-of-life decision making, regarding their attitudes on the advance directive and on the dying process. The respondents were to indicate their agreement or disagreement with eight statements on the advance directive and to specify their personal view of the beginning of the dying process. 727 doctors (anaesthetists or intensive-care physicians, internal specialists and general practitioners) in three federal states and 469 judges dealing with guardianship matters all over Germany. Comparisons of means, analyses of variance, cross-tabulations (χ2 test) and factor analyses (varimax with Kaiser normalisation). Three attitude groups on the advance directive were disclosed by the analysis: the decision model, which emphasises the binding character of a situational advance directive; the deliberation model, which puts more emphasis on the communicative aspect; and the delegation model, which regards the advance directive as a legal instrument. The answers regarding the beginning of the dying process were broadly distributed, but no marked difference was observed between the responding professions. Most participants assumed the dying process to begin with a life expectancy of only a few days. Advance directives were valued highly by both German doctors and judges; most agreed with the binding character of the situational directive. Regarding the different individual concepts of the dying process, a cross-professional discourse on the contents of this term seems to be overdue.
Kamil, W; Al Habashneh, R; Khader, Y; Al Bayati, L; Taani, D
2011-10-01
Data on whether periodontal therapy affects serum CRP levels are inconclusive. The aim of this study was to determine if nonsurgical periodontal therapy has any effect on CRP and serum lipid levels in patients with advanced periodontitis. Thirty-six systemically healthy patients, ≥ 40 years of age and with advanced periodontitis, were recruited for the study. Patients were randomized consecutively to one of two groups: the treatment group (n = 18) or the control group (n = 18). Treated subjects received nonsurgical periodontal therapy, which included oral hygiene instructions and subgingival scaling and root planing. Systemic levels of inflammatory markers [C-reactive protein (CRP) and the lipid profile] were measured at baseline and 3 mo after periodontal therapy. Nonsurgical periodontal therapy in the treatment group resulted in a significant reduction in the serum CRP level. The average CRP level decreased from 2.3 mg/dL at baseline to 1.8 mg/dL (p < 0.005) after 3 mo of periodontal therapy. The average reduction in CRP was 0.498 mg/dL (95% confidence interval = 0.265-0.731). In the treatment group, the reduction in CRP was significantly, linearly and directly correlated with the reduction in the plaque index, the gingival index and the percentage of sites with pocket depth ≥ 7 mm (Pearson correlation coefficients = 0.746, 0.425 and 0.621, respectively). Nonsurgical periodontal therapy had no effect on the lipid parameters. This study demonstrated that nonsurgical periodontal therapy results in a significant reduction in the serum CRP level. The effect of this outcome on systemic disease is still unknown. © 2011 John Wiley & Sons A/S.
Chen, Hong; Zhang, Mingmei
2013-08-06
This study aimed at quantifying the concentration and removal of antibiotic resistance genes (ARGs) in three municipal wastewater treatment plants (WWTPs) employing different advanced treatment systems [biological aerated filter, constructed wetland, and ultraviolet (UV) disinfection]. The concentrations of tetM, tetO, tetQ, tetW, sulI, sulII, intI1, and 16S rDNA genes were examined in wastewater and biosolid samples. In the municipal WWTPs, ARG reductions of 1-3 orders of magnitude were observed, and no difference was found among the three municipal WWTPs with different treatment processes (p > 0.05). Among the advanced treatment systems, ARG reductions of 1-3 orders of magnitude were observed in the constructed wetlands and of 0.6-1.2 orders of magnitude in the biological aerated filter, but no apparent decrease from UV disinfection was observed. A significant difference was found between the constructed wetlands and the biological filter (p < 0.05) and between the constructed wetlands and UV disinfection (p < 0.05). In the constructed wetlands, significant correlations were observed between the removal of ARGs and that of 16S rDNA genes (R² = 0.391-0.866; p < 0.05). Constructed wetlands not only achieved ARG removal comparable to that of the WWTPs (p > 0.05) but also had an advantage in removing the relative abundance of ARGs, and should therefore be given priority as an advanced treatment system for further ARG attenuation from WWTPs.
Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando
2007-01-01
The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) has been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of the reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, the treatment with UV/H2O2/O3 was the most effective, removing 87% of the color, turbidity and remaining COD when applied to the flocculated coffee wastewater.
Advanced supersonic propulsion study, phase 3
NASA Technical Reports Server (NTRS)
Howlett, R. A.; Johnson, J.; Sabatella, J.; Sewall, T.
1976-01-01
The variable stream control engine is determined to be the most promising propulsion system concept for advanced supersonic cruise aircraft. This concept uses variable geometry components and a unique throttle schedule for independent control of two flow streams to provide low jet noise at takeoff and high performance at both subsonic and supersonic cruise. The advanced technology offers a 25% improvement in airplane range and an 8 decibel reduction in takeoff noise, relative to first generation supersonic turbojet engines.
NASA Technical Reports Server (NTRS)
Ripple, William J.
1995-01-01
NOAA-9 satellite data from the Advanced Very High Resolution Radiometer (AVHRR) were used in conjunction with Landsat Multispectral Scanner (MSS) data to determine the proportion of closed canopy conifer forest cover in the Cascade Range of Oregon. A closed canopy conifer map, as determined from the MSS, was registered with AVHRR pixels. Regression was used to relate closed canopy conifer forest cover to AVHRR spectral data. A two-variable (band) regression model accounted for more variance in conifer cover than the Normalized Difference Vegetation Index (NDVI). The spectral signatures of various conifer successional stages were also examined. A map of Oregon was produced showing the proportion of closed canopy conifer cover for each AVHRR pixel. The AVHRR was responsive to both the percentage of closed canopy conifer cover and the successional stage in these temperate coniferous forests in this experiment.
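The NDVI against which the two-band regression model was compared is the normalized difference of the near-infrared and red reflectances, NDVI = (NIR − Red)/(NIR + Red). A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

# NDVI from AVHRR channel 1 (visible red) and channel 2 (near-infrared).
# The reflectance values below are hypothetical, chosen only to illustrate
# the index; they are not measurements from the Oregon study.
red = np.array([0.05, 0.08, 0.12])   # channel-1 reflectance (assumed)
nir = np.array([0.40, 0.35, 0.20])   # channel-2 reflectance (assumed)

ndvi = (nir - red) / (nir + red)
# Dense conifer canopy tends toward higher NDVI than sparse or open cover
```

The study's finding is that a regression on two raw bands explained more variance in conifer cover than this single combined index.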
NASA Astrophysics Data System (ADS)
Yuan, Lu; Li, Yao; Li, Hangdao; Lu, Hongyang; Tong, Shanbao
2015-09-01
The rodent middle cerebral artery occlusion (MCAO) model is commonly used in stroke research. Creating a stable infarct volume has always been challenging for technicians due to variability in animal anatomy and surgical technique. The depth of filament suture advancement also strongly influences the infarct volume. We investigated the cerebral blood flow (CBF) changes in the affected cortex using laser speckle contrast imaging while advancing the suture during MCAO surgery. The relative CBF drop area (CBF50, i.e., the percentage area with CBF less than 50% of the baseline) increased from 20.9% to 69.1% when the insertion depth increased from 1.6 to 1.8 cm. Using the real-time CBF50 marker to guide suture insertion during the surgery, our animal experiments showed that intraoperative CBF-guided surgery significantly improved the stability of MCAO, with a more consistent infarct volume and lower mortality.
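The CBF50 marker defined above is straightforward to compute from a pair of flow maps. A hedged sketch on synthetic data follows; the map sizes, the affected region, and the 30% residual-flow factor are all assumptions for illustration, not values from the study.

```python
import numpy as np

# Sketch of the CBF50 marker: the percentage of cortical pixels whose
# cerebral blood flow falls below 50% of the pre-occlusion baseline.
# Both flow maps here are synthetic stand-ins for laser speckle data.
rng = np.random.default_rng(2)

baseline = rng.uniform(80, 120, size=(64, 64))   # baseline CBF map (assumed units)
current = baseline.copy()
current[:, :44] *= 0.3                           # simulate an ischemic region

cbf50 = 100.0 * np.mean(current < 0.5 * baseline)
# cbf50 = 68.75 here (44 of 64 columns affected), in the range the study
# reports at 1.8 cm insertion depth
```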
NASA Astrophysics Data System (ADS)
Yin, Shaohua; Lin, Guo; Li, Shiwei; Peng, Jinhui; Zhang, Libo
2016-09-01
Microwave heating has been applied to the drying of rare earth carbonates to improve drying efficiency and reduce energy consumption. The effects of power density, material thickness and drying time on the weight reduction (WR) are studied using response surface methodology (RSM). The results show that RSM is feasible for describing the relationship between the independent variables and weight reduction. Based on the analysis of variance (ANOVA), the model is in accordance with the experimental data. The optimum experimental conditions are a power density of 6 W/g, material thickness of 15 mm and drying time of 15 min, yielding an experimental weight reduction of 73%. Comparative experiments show that microwave drying has the advantages of rapid dehydration and energy conservation. Particle analysis shows that the size distribution of rare earth carbonates after microwave drying is more uniform than after oven drying. These findings indicate that microwave heating offers substantial energy savings and improved production efficiency for rare earth smelting enterprises and is a green heating process.
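Response surface methodology fits a second-order polynomial in the process variables and locates its optimum. A minimal sketch in two of the three factors, with synthetic data generated around the reported optimum; every coefficient below is an assumption for illustration, not the paper's fitted model.

```python
import numpy as np

# Hypothetical RSM sketch: fit a quadratic model of weight reduction (WR)
# in power density (x1, W/g) and drying time (x2, min), then evaluate it at
# candidate settings. The data are synthetic, centered on the reported
# optimum of 6 W/g and 15 min with a 73% peak WR.
rng = np.random.default_rng(3)

x1 = rng.uniform(2, 8, 20)     # power density, W/g
x2 = rng.uniform(5, 25, 20)    # drying time, min
wr = 73 - 0.9 * (x1 - 6) ** 2 - 0.05 * (x2 - 15) ** 2 + rng.normal(0, 0.5, 20)

# Second-order (quadratic) design matrix typical of RSM
a = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
coef, *_ = np.linalg.lstsq(a, wr, rcond=None)

def predict(p, t):
    """Evaluate the fitted response surface at power density p, time t."""
    return coef @ np.array([1.0, p, t, p * t, p ** 2, t ** 2])

# The fitted surface should peak near the reported optimum (6 W/g, 15 min)
```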